00:00:00.000 Started by upstream project "autotest-per-patch" build number 132363 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.153 Fetching changes from the remote Git repository 00:00:00.160 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.208 Using shallow fetch with depth 1 00:00:00.208 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.208 > git --version # timeout=10 00:00:00.247 > git --version # 'git version 2.39.2' 00:00:00.247 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.276 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.276 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.162 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.174 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.186 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.186 > git config core.sparsecheckout # timeout=10 00:00:07.197 > git read-tree -mu HEAD # timeout=10 00:00:07.214 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.233 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.233 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.315 [Pipeline] Start of Pipeline 00:00:07.330 [Pipeline] library 00:00:07.332 Loading library shm_lib@master 00:00:07.332 Library shm_lib@master is cached. Copying from home. 00:00:07.354 [Pipeline] node 00:00:07.366 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.367 [Pipeline] { 00:00:07.376 [Pipeline] catchError 00:00:07.377 [Pipeline] { 00:00:07.391 [Pipeline] wrap 00:00:07.402 [Pipeline] { 00:00:07.412 [Pipeline] stage 00:00:07.414 [Pipeline] { (Prologue) 00:00:07.632 [Pipeline] sh 00:00:07.912 + logger -p user.info -t JENKINS-CI 00:00:07.932 [Pipeline] echo 00:00:07.934 Node: WFP8 00:00:07.941 [Pipeline] sh 00:00:08.245 [Pipeline] setCustomBuildProperty 00:00:08.259 [Pipeline] echo 00:00:08.261 Cleanup processes 00:00:08.267 [Pipeline] sh 00:00:08.555 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.555 2634881 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.569 [Pipeline] sh 00:00:08.854 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.854 ++ grep -v 'sudo pgrep' 00:00:08.854 ++ awk '{print $1}' 00:00:08.854 + sudo kill -9 00:00:08.854 + true 00:00:08.869 [Pipeline] cleanWs 00:00:08.880 [WS-CLEANUP] Deleting project workspace... 00:00:08.880 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.887 [WS-CLEANUP] done 00:00:08.891 [Pipeline] setCustomBuildProperty 00:00:08.906 [Pipeline] sh 00:00:09.189 + sudo git config --global --replace-all safe.directory '*' 00:00:09.303 [Pipeline] httpRequest 00:00:09.656 [Pipeline] echo 00:00:09.658 Sorcerer 10.211.164.20 is alive 00:00:09.668 [Pipeline] retry 00:00:09.670 [Pipeline] { 00:00:09.685 [Pipeline] httpRequest 00:00:09.689 HttpMethod: GET 00:00:09.690 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.690 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.693 Response Code: HTTP/1.1 200 OK 00:00:09.693 Success: Status code 200 is in the accepted range: 200,404 00:00:09.694 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.939 [Pipeline] } 00:00:10.957 [Pipeline] // retry 00:00:10.965 [Pipeline] sh 00:00:11.249 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.267 [Pipeline] httpRequest 00:00:11.643 [Pipeline] echo 00:00:11.645 Sorcerer 10.211.164.20 is alive 00:00:11.655 [Pipeline] retry 00:00:11.657 [Pipeline] { 00:00:11.672 [Pipeline] httpRequest 00:00:11.677 HttpMethod: GET 00:00:11.677 URL: http://10.211.164.20/packages/spdk_27a4d33d814bb10b620f794184551bc97112e236.tar.gz 00:00:11.678 Sending request to url: http://10.211.164.20/packages/spdk_27a4d33d814bb10b620f794184551bc97112e236.tar.gz 00:00:11.691 Response Code: HTTP/1.1 200 OK 00:00:11.691 Success: Status code 200 is in the accepted range: 200,404 00:00:11.692 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_27a4d33d814bb10b620f794184551bc97112e236.tar.gz 00:00:53.698 [Pipeline] } 00:00:53.716 [Pipeline] // retry 00:00:53.724 [Pipeline] sh 00:00:54.012 + tar --no-same-owner -xf spdk_27a4d33d814bb10b620f794184551bc97112e236.tar.gz 00:00:56.562 [Pipeline] sh 00:00:56.850 + git -C spdk log 
--oneline -n5 00:00:56.850 27a4d33d8 test/common: [TEST] Make sure get_zoned_devs() picks all namespaces 00:00:56.850 6bb7b0f7b test/nvme/interrupt: Verify pre|post IO cpu load 00:00:56.850 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:00:56.850 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:00:56.850 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:00:56.862 [Pipeline] } 00:00:56.878 [Pipeline] // stage 00:00:56.888 [Pipeline] stage 00:00:56.890 [Pipeline] { (Prepare) 00:00:56.906 [Pipeline] writeFile 00:00:56.921 [Pipeline] sh 00:00:57.206 + logger -p user.info -t JENKINS-CI 00:00:57.220 [Pipeline] sh 00:00:57.505 + logger -p user.info -t JENKINS-CI 00:00:57.518 [Pipeline] sh 00:00:57.804 + cat autorun-spdk.conf 00:00:57.804 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.804 SPDK_TEST_NVMF=1 00:00:57.804 SPDK_TEST_NVME_CLI=1 00:00:57.804 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.804 SPDK_TEST_NVMF_NICS=e810 00:00:57.804 SPDK_TEST_VFIOUSER=1 00:00:57.804 SPDK_RUN_UBSAN=1 00:00:57.804 NET_TYPE=phy 00:00:57.813 RUN_NIGHTLY=0 00:00:57.818 [Pipeline] readFile 00:00:57.846 [Pipeline] withEnv 00:00:57.848 [Pipeline] { 00:00:57.861 [Pipeline] sh 00:00:58.147 + set -ex 00:00:58.147 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:58.147 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:58.147 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.147 ++ SPDK_TEST_NVMF=1 00:00:58.147 ++ SPDK_TEST_NVME_CLI=1 00:00:58.147 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.147 ++ SPDK_TEST_NVMF_NICS=e810 00:00:58.147 ++ SPDK_TEST_VFIOUSER=1 00:00:58.147 ++ SPDK_RUN_UBSAN=1 00:00:58.147 ++ NET_TYPE=phy 00:00:58.147 ++ RUN_NIGHTLY=0 00:00:58.147 + case $SPDK_TEST_NVMF_NICS in 00:00:58.147 + DRIVERS=ice 00:00:58.147 + [[ tcp == \r\d\m\a ]] 00:00:58.147 + [[ -n ice ]] 00:00:58.147 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:58.147 rmmod: ERROR: Module mlx4_ib is not currently 
loaded 00:00:58.147 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:58.147 rmmod: ERROR: Module irdma is not currently loaded 00:00:58.147 rmmod: ERROR: Module i40iw is not currently loaded 00:00:58.147 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:58.147 + true 00:00:58.147 + for D in $DRIVERS 00:00:58.147 + sudo modprobe ice 00:00:58.147 + exit 0 00:00:58.157 [Pipeline] } 00:00:58.173 [Pipeline] // withEnv 00:00:58.179 [Pipeline] } 00:00:58.194 [Pipeline] // stage 00:00:58.205 [Pipeline] catchError 00:00:58.207 [Pipeline] { 00:00:58.223 [Pipeline] timeout 00:00:58.223 Timeout set to expire in 1 hr 0 min 00:00:58.225 [Pipeline] { 00:00:58.240 [Pipeline] stage 00:00:58.242 [Pipeline] { (Tests) 00:00:58.258 [Pipeline] sh 00:00:58.546 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.547 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.547 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.547 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:58.547 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:58.547 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:58.547 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:58.547 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:58.547 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:58.547 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:58.547 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:58.547 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:58.547 + source /etc/os-release 00:00:58.547 ++ NAME='Fedora Linux' 00:00:58.547 ++ VERSION='39 (Cloud Edition)' 00:00:58.547 ++ ID=fedora 00:00:58.547 ++ VERSION_ID=39 00:00:58.547 ++ VERSION_CODENAME= 00:00:58.547 ++ PLATFORM_ID=platform:f39 00:00:58.547 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:58.547 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:58.547 ++ LOGO=fedora-logo-icon 00:00:58.547 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:58.547 ++ HOME_URL=https://fedoraproject.org/ 00:00:58.547 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:58.547 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:58.547 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:58.547 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:58.547 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:58.547 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:58.547 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:58.547 ++ SUPPORT_END=2024-11-12 00:00:58.547 ++ VARIANT='Cloud Edition' 00:00:58.547 ++ VARIANT_ID=cloud 00:00:58.547 + uname -a 00:00:58.547 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:58.547 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:01.088 Hugepages 00:01:01.088 node hugesize free / total 00:01:01.088 node0 1048576kB 0 / 0 00:01:01.088 node0 2048kB 0 / 0 00:01:01.088 node1 1048576kB 0 / 0 00:01:01.088 node1 2048kB 0 / 0 00:01:01.088 00:01:01.088 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:01.088 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:01.088 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:01.088 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:01.088 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:01.088 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:01.088 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:01.088 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:01.088 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:01.088 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:01.088 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:01.088 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:01.088 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:01.088 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:01.088 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:01.088 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:01.088 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:01.088 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:01.088 + rm -f /tmp/spdk-ld-path 00:01:01.088 + source autorun-spdk.conf 00:01:01.088 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.088 ++ SPDK_TEST_NVMF=1 00:01:01.088 ++ SPDK_TEST_NVME_CLI=1 00:01:01.088 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.088 ++ SPDK_TEST_NVMF_NICS=e810 00:01:01.088 ++ SPDK_TEST_VFIOUSER=1 00:01:01.088 ++ SPDK_RUN_UBSAN=1 00:01:01.088 ++ NET_TYPE=phy 00:01:01.088 ++ RUN_NIGHTLY=0 00:01:01.088 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:01.088 + [[ -n '' ]] 00:01:01.088 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.088 + for M in /var/spdk/build-*-manifest.txt 00:01:01.088 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:01.088 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.088 + for M in /var/spdk/build-*-manifest.txt 00:01:01.088 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:01.088 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.088 + for M in /var/spdk/build-*-manifest.txt 00:01:01.088 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:01.088 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.088 ++ uname 00:01:01.088 + [[ Linux == \L\i\n\u\x ]] 00:01:01.088 + sudo dmesg -T 00:01:01.348 + sudo dmesg --clear 00:01:01.348 + dmesg_pid=2636319 00:01:01.348 + [[ Fedora Linux == FreeBSD ]] 00:01:01.348 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.348 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.348 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:01.348 + [[ -x /usr/src/fio-static/fio ]] 00:01:01.348 + export FIO_BIN=/usr/src/fio-static/fio 00:01:01.348 + FIO_BIN=/usr/src/fio-static/fio 00:01:01.348 + sudo dmesg -Tw 00:01:01.348 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:01.348 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:01.348 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:01.348 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.348 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.348 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:01.348 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.348 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.348 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.348 09:33:24 -- common/autotest_common.sh@1702 -- $ [[ n == y ]] 00:01:01.349 09:33:24 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.349 09:33:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.349 09:33:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:01.349 09:33:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:01.349 09:33:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:01.349 09:33:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:01.349 09:33:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:01.349 09:33:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:01.349 09:33:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:01.349 09:33:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:01.349 09:33:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:01.349 09:33:24 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.349 09:33:24 -- common/autotest_common.sh@1702 -- $ [[ n == y ]] 00:01:01.349 09:33:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:01.349 09:33:24 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:01.349 09:33:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:01.349 09:33:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:01.349 09:33:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:01.349 09:33:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.349 09:33:24 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.349 09:33:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.349 09:33:24 -- paths/export.sh@5 -- $ export PATH 00:01:01.349 09:33:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.349 09:33:24 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:01.349 09:33:24 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:01.349 09:33:24 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732091604.XXXXXX 00:01:01.349 09:33:24 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732091604.Va8zXb 00:01:01.349 09:33:24 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:01.349 09:33:24 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:01.349 09:33:24 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:01.349 09:33:24 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:01.349 09:33:24 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:01.349 09:33:24 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:01.349 09:33:24 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:01.349 09:33:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.349 09:33:24 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:01.349 09:33:24 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:01.349 09:33:24 -- pm/common@17 -- $ local monitor 00:01:01.349 09:33:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.349 09:33:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.349 09:33:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.349 09:33:24 -- pm/common@21 -- $ date +%s 00:01:01.349 09:33:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.349 09:33:24 -- pm/common@21 -- $ date +%s 00:01:01.349 09:33:24 -- pm/common@25 -- $ sleep 1 00:01:01.349 09:33:24 -- pm/common@21 -- $ date +%s 00:01:01.349 09:33:24 -- pm/common@21 -- $ date +%s 00:01:01.349 09:33:24 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091604 00:01:01.349 09:33:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091604 00:01:01.349 09:33:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091604 00:01:01.349 09:33:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732091604 00:01:01.609 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091604_collect-cpu-load.pm.log 00:01:01.609 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091604_collect-vmstat.pm.log 00:01:01.609 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091604_collect-cpu-temp.pm.log 00:01:01.609 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732091604_collect-bmc-pm.bmc.pm.log 00:01:02.548 09:33:25 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:02.548 09:33:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:02.548 09:33:25 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:02.548 09:33:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.548 09:33:25 -- spdk/autobuild.sh@16 -- $ date -u 00:01:02.548 Wed Nov 20 08:33:25 AM UTC 2024 00:01:02.548 09:33:25 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:02.548 v25.01-pre-201-g27a4d33d8 00:01:02.548 09:33:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:02.548 09:33:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:02.548 09:33:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:02.548 09:33:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:02.548 09:33:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:02.548 09:33:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.548 ************************************ 00:01:02.548 START TEST ubsan 00:01:02.548 ************************************ 00:01:02.548 09:33:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:02.548 using ubsan 00:01:02.548 00:01:02.548 real 0m0.000s 00:01:02.548 user 0m0.000s 00:01:02.548 sys 0m0.000s 00:01:02.548 09:33:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:02.548 09:33:25 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:02.548 ************************************ 00:01:02.548 END TEST ubsan 00:01:02.548 ************************************ 00:01:02.548 09:33:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:02.548 09:33:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:02.548 09:33:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:02.548 09:33:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:02.548 09:33:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:02.548 09:33:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:02.548 09:33:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:02.548 09:33:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:02.548 09:33:25 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:02.808 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:02.808 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:03.067 Using 'verbs' RDMA provider 00:01:16.221 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:28.488 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:28.488 Creating mk/config.mk...done. 00:01:28.488 Creating mk/cc.flags.mk...done. 00:01:28.488 Type 'make' to build. 00:01:28.488 09:33:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:28.488 09:33:51 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:28.488 09:33:51 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:28.488 09:33:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.488 ************************************ 00:01:28.488 START TEST make 00:01:28.488 ************************************ 00:01:28.488 09:33:51 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:28.488 make[1]: Nothing to be done for 'all'. 
00:01:29.871 The Meson build system 00:01:29.871 Version: 1.5.0 00:01:29.871 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:29.871 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:29.871 Build type: native build 00:01:29.871 Project name: libvfio-user 00:01:29.871 Project version: 0.0.1 00:01:29.871 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:29.871 C linker for the host machine: cc ld.bfd 2.40-14 00:01:29.871 Host machine cpu family: x86_64 00:01:29.871 Host machine cpu: x86_64 00:01:29.871 Run-time dependency threads found: YES 00:01:29.871 Library dl found: YES 00:01:29.871 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:29.871 Run-time dependency json-c found: YES 0.17 00:01:29.871 Run-time dependency cmocka found: YES 1.1.7 00:01:29.871 Program pytest-3 found: NO 00:01:29.871 Program flake8 found: NO 00:01:29.871 Program misspell-fixer found: NO 00:01:29.871 Program restructuredtext-lint found: NO 00:01:29.871 Program valgrind found: YES (/usr/bin/valgrind) 00:01:29.871 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:29.871 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:29.871 Compiler for C supports arguments -Wwrite-strings: YES 00:01:29.871 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:29.871 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:29.871 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:29.871 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:29.871 Build targets in project: 8 00:01:29.871 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:29.871 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:29.871 00:01:29.871 libvfio-user 0.0.1 00:01:29.871 00:01:29.871 User defined options 00:01:29.871 buildtype : debug 00:01:29.871 default_library: shared 00:01:29.871 libdir : /usr/local/lib 00:01:29.871 00:01:29.871 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.441 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:30.441 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:30.441 [2/37] Compiling C object samples/null.p/null.c.o 00:01:30.441 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:30.441 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:30.441 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:30.441 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:30.441 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:30.441 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:30.441 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:30.441 [10/37] Compiling C object samples/server.p/server.c.o 00:01:30.699 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:30.699 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:30.699 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:30.699 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:30.699 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:30.699 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:30.699 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:30.699 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:30.699 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:30.699 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:30.699 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:30.699 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:30.699 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:30.699 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:30.699 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:30.699 [26/37] Compiling C object samples/client.p/client.c.o 00:01:30.699 [27/37] Linking target samples/client 00:01:30.699 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:30.699 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:30.699 [30/37] Linking target test/unit_tests 00:01:30.699 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:30.958 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:30.958 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:30.958 [34/37] Linking target samples/server 00:01:30.958 [35/37] Linking target samples/null 00:01:30.958 [36/37] Linking target samples/lspci 00:01:30.958 [37/37] Linking target samples/gpio-pci-idio-16 00:01:30.958 INFO: autodetecting backend as ninja 00:01:30.958 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:30.958 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:31.218 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:31.476 ninja: no work to do. 
00:01:36.754 The Meson build system 00:01:36.754 Version: 1.5.0 00:01:36.754 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:36.754 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:36.754 Build type: native build 00:01:36.754 Program cat found: YES (/usr/bin/cat) 00:01:36.754 Project name: DPDK 00:01:36.754 Project version: 24.03.0 00:01:36.754 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:36.754 C linker for the host machine: cc ld.bfd 2.40-14 00:01:36.754 Host machine cpu family: x86_64 00:01:36.754 Host machine cpu: x86_64 00:01:36.754 Message: ## Building in Developer Mode ## 00:01:36.754 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:36.754 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:36.754 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:36.754 Program python3 found: YES (/usr/bin/python3) 00:01:36.754 Program cat found: YES (/usr/bin/cat) 00:01:36.754 Compiler for C supports arguments -march=native: YES 00:01:36.755 Checking for size of "void *" : 8 00:01:36.755 Checking for size of "void *" : 8 (cached) 00:01:36.755 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:36.755 Library m found: YES 00:01:36.755 Library numa found: YES 00:01:36.755 Has header "numaif.h" : YES 00:01:36.755 Library fdt found: NO 00:01:36.755 Library execinfo found: NO 00:01:36.755 Has header "execinfo.h" : YES 00:01:36.755 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:36.755 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:36.755 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:36.755 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:36.755 Run-time dependency openssl found: YES 3.1.1 00:01:36.755 Run-time 
dependency libpcap found: YES 1.10.4 00:01:36.755 Has header "pcap.h" with dependency libpcap: YES 00:01:36.755 Compiler for C supports arguments -Wcast-qual: YES 00:01:36.755 Compiler for C supports arguments -Wdeprecated: YES 00:01:36.755 Compiler for C supports arguments -Wformat: YES 00:01:36.755 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:36.755 Compiler for C supports arguments -Wformat-security: NO 00:01:36.755 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.755 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:36.755 Compiler for C supports arguments -Wnested-externs: YES 00:01:36.755 Compiler for C supports arguments -Wold-style-definition: YES 00:01:36.755 Compiler for C supports arguments -Wpointer-arith: YES 00:01:36.755 Compiler for C supports arguments -Wsign-compare: YES 00:01:36.755 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:36.755 Compiler for C supports arguments -Wundef: YES 00:01:36.755 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.755 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:36.755 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:36.755 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.755 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:36.755 Program objdump found: YES (/usr/bin/objdump) 00:01:36.755 Compiler for C supports arguments -mavx512f: YES 00:01:36.755 Checking if "AVX512 checking" compiles: YES 00:01:36.755 Fetching value of define "__SSE4_2__" : 1 00:01:36.755 Fetching value of define "__AES__" : 1 00:01:36.755 Fetching value of define "__AVX__" : 1 00:01:36.755 Fetching value of define "__AVX2__" : 1 00:01:36.755 Fetching value of define "__AVX512BW__" : 1 00:01:36.755 Fetching value of define "__AVX512CD__" : 1 00:01:36.755 Fetching value of define "__AVX512DQ__" : 1 00:01:36.755 Fetching value of define "__AVX512F__" : 1 
00:01:36.755 Fetching value of define "__AVX512VL__" : 1 00:01:36.755 Fetching value of define "__PCLMUL__" : 1 00:01:36.755 Fetching value of define "__RDRND__" : 1 00:01:36.755 Fetching value of define "__RDSEED__" : 1 00:01:36.755 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:36.755 Fetching value of define "__znver1__" : (undefined) 00:01:36.755 Fetching value of define "__znver2__" : (undefined) 00:01:36.755 Fetching value of define "__znver3__" : (undefined) 00:01:36.755 Fetching value of define "__znver4__" : (undefined) 00:01:36.755 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:36.755 Message: lib/log: Defining dependency "log" 00:01:36.755 Message: lib/kvargs: Defining dependency "kvargs" 00:01:36.755 Message: lib/telemetry: Defining dependency "telemetry" 00:01:36.755 Checking for function "getentropy" : NO 00:01:36.755 Message: lib/eal: Defining dependency "eal" 00:01:36.755 Message: lib/ring: Defining dependency "ring" 00:01:36.755 Message: lib/rcu: Defining dependency "rcu" 00:01:36.755 Message: lib/mempool: Defining dependency "mempool" 00:01:36.755 Message: lib/mbuf: Defining dependency "mbuf" 00:01:36.755 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:36.755 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:36.755 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:36.755 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:36.755 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:36.755 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:36.755 Compiler for C supports arguments -mpclmul: YES 00:01:36.755 Compiler for C supports arguments -maes: YES 00:01:36.755 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.755 Compiler for C supports arguments -mavx512bw: YES 00:01:36.755 Compiler for C supports arguments -mavx512dq: YES 00:01:36.755 Compiler for C supports arguments -mavx512vl: YES 00:01:36.755 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:36.755 Compiler for C supports arguments -mavx2: YES 00:01:36.755 Compiler for C supports arguments -mavx: YES 00:01:36.755 Message: lib/net: Defining dependency "net" 00:01:36.755 Message: lib/meter: Defining dependency "meter" 00:01:36.755 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.755 Message: lib/pci: Defining dependency "pci" 00:01:36.755 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.755 Message: lib/hash: Defining dependency "hash" 00:01:36.755 Message: lib/timer: Defining dependency "timer" 00:01:36.755 Message: lib/compressdev: Defining dependency "compressdev" 00:01:36.755 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.755 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.755 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.755 Message: lib/power: Defining dependency "power" 00:01:36.755 Message: lib/reorder: Defining dependency "reorder" 00:01:36.755 Message: lib/security: Defining dependency "security" 00:01:36.755 Has header "linux/userfaultfd.h" : YES 00:01:36.755 Has header "linux/vduse.h" : YES 00:01:36.755 Message: lib/vhost: Defining dependency "vhost" 00:01:36.755 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.755 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.755 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:36.755 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:36.755 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:36.755 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:36.755 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:36.755 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:36.755 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:36.755 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:36.755 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:36.755 Configuring doxy-api-html.conf using configuration 00:01:36.755 Configuring doxy-api-man.conf using configuration 00:01:36.755 Program mandb found: YES (/usr/bin/mandb) 00:01:36.755 Program sphinx-build found: NO 00:01:36.755 Configuring rte_build_config.h using configuration 00:01:36.755 Message: 00:01:36.755 ================= 00:01:36.755 Applications Enabled 00:01:36.755 ================= 00:01:36.755 00:01:36.755 apps: 00:01:36.755 00:01:36.755 00:01:36.755 Message: 00:01:36.755 ================= 00:01:36.755 Libraries Enabled 00:01:36.755 ================= 00:01:36.755 00:01:36.755 libs: 00:01:36.755 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:36.755 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:36.755 cryptodev, dmadev, power, reorder, security, vhost, 00:01:36.755 00:01:36.755 Message: 00:01:36.755 =============== 00:01:36.755 Drivers Enabled 00:01:36.755 =============== 00:01:36.755 00:01:36.755 common: 00:01:36.755 00:01:36.755 bus: 00:01:36.755 pci, vdev, 00:01:36.755 mempool: 00:01:36.755 ring, 00:01:36.755 dma: 00:01:36.755 00:01:36.755 net: 00:01:36.755 00:01:36.755 crypto: 00:01:36.755 00:01:36.755 compress: 00:01:36.755 00:01:36.755 vdpa: 00:01:36.755 00:01:36.755 00:01:36.755 Message: 00:01:36.755 ================= 00:01:36.755 Content Skipped 00:01:36.755 ================= 00:01:36.755 00:01:36.755 apps: 00:01:36.755 dumpcap: explicitly disabled via build config 00:01:36.755 graph: explicitly disabled via build config 00:01:36.755 pdump: explicitly disabled via build config 00:01:36.755 proc-info: explicitly disabled via build config 00:01:36.755 test-acl: explicitly disabled via build config 00:01:36.755 test-bbdev: explicitly disabled via build config 00:01:36.755 test-cmdline: explicitly disabled via build config 00:01:36.755 test-compress-perf: explicitly disabled via build config 00:01:36.755 test-crypto-perf: explicitly disabled 
via build config 00:01:36.755 test-dma-perf: explicitly disabled via build config 00:01:36.755 test-eventdev: explicitly disabled via build config 00:01:36.755 test-fib: explicitly disabled via build config 00:01:36.755 test-flow-perf: explicitly disabled via build config 00:01:36.755 test-gpudev: explicitly disabled via build config 00:01:36.755 test-mldev: explicitly disabled via build config 00:01:36.755 test-pipeline: explicitly disabled via build config 00:01:36.755 test-pmd: explicitly disabled via build config 00:01:36.755 test-regex: explicitly disabled via build config 00:01:36.755 test-sad: explicitly disabled via build config 00:01:36.755 test-security-perf: explicitly disabled via build config 00:01:36.755 00:01:36.755 libs: 00:01:36.755 argparse: explicitly disabled via build config 00:01:36.755 metrics: explicitly disabled via build config 00:01:36.755 acl: explicitly disabled via build config 00:01:36.755 bbdev: explicitly disabled via build config 00:01:36.755 bitratestats: explicitly disabled via build config 00:01:36.755 bpf: explicitly disabled via build config 00:01:36.755 cfgfile: explicitly disabled via build config 00:01:36.755 distributor: explicitly disabled via build config 00:01:36.755 efd: explicitly disabled via build config 00:01:36.755 eventdev: explicitly disabled via build config 00:01:36.755 dispatcher: explicitly disabled via build config 00:01:36.755 gpudev: explicitly disabled via build config 00:01:36.755 gro: explicitly disabled via build config 00:01:36.755 gso: explicitly disabled via build config 00:01:36.756 ip_frag: explicitly disabled via build config 00:01:36.756 jobstats: explicitly disabled via build config 00:01:36.756 latencystats: explicitly disabled via build config 00:01:36.756 lpm: explicitly disabled via build config 00:01:36.756 member: explicitly disabled via build config 00:01:36.756 pcapng: explicitly disabled via build config 00:01:36.756 rawdev: explicitly disabled via build config 00:01:36.756 regexdev: 
explicitly disabled via build config 00:01:36.756 mldev: explicitly disabled via build config 00:01:36.756 rib: explicitly disabled via build config 00:01:36.756 sched: explicitly disabled via build config 00:01:36.756 stack: explicitly disabled via build config 00:01:36.756 ipsec: explicitly disabled via build config 00:01:36.756 pdcp: explicitly disabled via build config 00:01:36.756 fib: explicitly disabled via build config 00:01:36.756 port: explicitly disabled via build config 00:01:36.756 pdump: explicitly disabled via build config 00:01:36.756 table: explicitly disabled via build config 00:01:36.756 pipeline: explicitly disabled via build config 00:01:36.756 graph: explicitly disabled via build config 00:01:36.756 node: explicitly disabled via build config 00:01:36.756 00:01:36.756 drivers: 00:01:36.756 common/cpt: not in enabled drivers build config 00:01:36.756 common/dpaax: not in enabled drivers build config 00:01:36.756 common/iavf: not in enabled drivers build config 00:01:36.756 common/idpf: not in enabled drivers build config 00:01:36.756 common/ionic: not in enabled drivers build config 00:01:36.756 common/mvep: not in enabled drivers build config 00:01:36.756 common/octeontx: not in enabled drivers build config 00:01:36.756 bus/auxiliary: not in enabled drivers build config 00:01:36.756 bus/cdx: not in enabled drivers build config 00:01:36.756 bus/dpaa: not in enabled drivers build config 00:01:36.756 bus/fslmc: not in enabled drivers build config 00:01:36.756 bus/ifpga: not in enabled drivers build config 00:01:36.756 bus/platform: not in enabled drivers build config 00:01:36.756 bus/uacce: not in enabled drivers build config 00:01:36.756 bus/vmbus: not in enabled drivers build config 00:01:36.756 common/cnxk: not in enabled drivers build config 00:01:36.756 common/mlx5: not in enabled drivers build config 00:01:36.756 common/nfp: not in enabled drivers build config 00:01:36.756 common/nitrox: not in enabled drivers build config 00:01:36.756 
common/qat: not in enabled drivers build config 00:01:36.756 common/sfc_efx: not in enabled drivers build config 00:01:36.756 mempool/bucket: not in enabled drivers build config 00:01:36.756 mempool/cnxk: not in enabled drivers build config 00:01:36.756 mempool/dpaa: not in enabled drivers build config 00:01:36.756 mempool/dpaa2: not in enabled drivers build config 00:01:36.756 mempool/octeontx: not in enabled drivers build config 00:01:36.756 mempool/stack: not in enabled drivers build config 00:01:36.756 dma/cnxk: not in enabled drivers build config 00:01:36.756 dma/dpaa: not in enabled drivers build config 00:01:36.756 dma/dpaa2: not in enabled drivers build config 00:01:36.756 dma/hisilicon: not in enabled drivers build config 00:01:36.756 dma/idxd: not in enabled drivers build config 00:01:36.756 dma/ioat: not in enabled drivers build config 00:01:36.756 dma/skeleton: not in enabled drivers build config 00:01:36.756 net/af_packet: not in enabled drivers build config 00:01:36.756 net/af_xdp: not in enabled drivers build config 00:01:36.756 net/ark: not in enabled drivers build config 00:01:36.756 net/atlantic: not in enabled drivers build config 00:01:36.756 net/avp: not in enabled drivers build config 00:01:36.756 net/axgbe: not in enabled drivers build config 00:01:36.756 net/bnx2x: not in enabled drivers build config 00:01:36.756 net/bnxt: not in enabled drivers build config 00:01:36.756 net/bonding: not in enabled drivers build config 00:01:36.756 net/cnxk: not in enabled drivers build config 00:01:36.756 net/cpfl: not in enabled drivers build config 00:01:36.756 net/cxgbe: not in enabled drivers build config 00:01:36.756 net/dpaa: not in enabled drivers build config 00:01:36.756 net/dpaa2: not in enabled drivers build config 00:01:36.756 net/e1000: not in enabled drivers build config 00:01:36.756 net/ena: not in enabled drivers build config 00:01:36.756 net/enetc: not in enabled drivers build config 00:01:36.756 net/enetfec: not in enabled drivers build 
config 00:01:36.756 net/enic: not in enabled drivers build config 00:01:36.756 net/failsafe: not in enabled drivers build config 00:01:36.756 net/fm10k: not in enabled drivers build config 00:01:36.756 net/gve: not in enabled drivers build config 00:01:36.756 net/hinic: not in enabled drivers build config 00:01:36.756 net/hns3: not in enabled drivers build config 00:01:36.756 net/i40e: not in enabled drivers build config 00:01:36.756 net/iavf: not in enabled drivers build config 00:01:36.756 net/ice: not in enabled drivers build config 00:01:36.756 net/idpf: not in enabled drivers build config 00:01:36.756 net/igc: not in enabled drivers build config 00:01:36.756 net/ionic: not in enabled drivers build config 00:01:36.756 net/ipn3ke: not in enabled drivers build config 00:01:36.756 net/ixgbe: not in enabled drivers build config 00:01:36.756 net/mana: not in enabled drivers build config 00:01:36.756 net/memif: not in enabled drivers build config 00:01:36.756 net/mlx4: not in enabled drivers build config 00:01:36.756 net/mlx5: not in enabled drivers build config 00:01:36.756 net/mvneta: not in enabled drivers build config 00:01:36.756 net/mvpp2: not in enabled drivers build config 00:01:36.756 net/netvsc: not in enabled drivers build config 00:01:36.756 net/nfb: not in enabled drivers build config 00:01:36.756 net/nfp: not in enabled drivers build config 00:01:36.756 net/ngbe: not in enabled drivers build config 00:01:36.756 net/null: not in enabled drivers build config 00:01:36.756 net/octeontx: not in enabled drivers build config 00:01:36.756 net/octeon_ep: not in enabled drivers build config 00:01:36.756 net/pcap: not in enabled drivers build config 00:01:36.756 net/pfe: not in enabled drivers build config 00:01:36.756 net/qede: not in enabled drivers build config 00:01:36.756 net/ring: not in enabled drivers build config 00:01:36.756 net/sfc: not in enabled drivers build config 00:01:36.756 net/softnic: not in enabled drivers build config 00:01:36.756 net/tap: 
not in enabled drivers build config 00:01:36.756 net/thunderx: not in enabled drivers build config 00:01:36.756 net/txgbe: not in enabled drivers build config 00:01:36.756 net/vdev_netvsc: not in enabled drivers build config 00:01:36.756 net/vhost: not in enabled drivers build config 00:01:36.756 net/virtio: not in enabled drivers build config 00:01:36.756 net/vmxnet3: not in enabled drivers build config 00:01:36.756 raw/*: missing internal dependency, "rawdev" 00:01:36.756 crypto/armv8: not in enabled drivers build config 00:01:36.756 crypto/bcmfs: not in enabled drivers build config 00:01:36.756 crypto/caam_jr: not in enabled drivers build config 00:01:36.756 crypto/ccp: not in enabled drivers build config 00:01:36.756 crypto/cnxk: not in enabled drivers build config 00:01:36.756 crypto/dpaa_sec: not in enabled drivers build config 00:01:36.756 crypto/dpaa2_sec: not in enabled drivers build config 00:01:36.756 crypto/ipsec_mb: not in enabled drivers build config 00:01:36.756 crypto/mlx5: not in enabled drivers build config 00:01:36.756 crypto/mvsam: not in enabled drivers build config 00:01:36.756 crypto/nitrox: not in enabled drivers build config 00:01:36.756 crypto/null: not in enabled drivers build config 00:01:36.756 crypto/octeontx: not in enabled drivers build config 00:01:36.756 crypto/openssl: not in enabled drivers build config 00:01:36.756 crypto/scheduler: not in enabled drivers build config 00:01:36.756 crypto/uadk: not in enabled drivers build config 00:01:36.756 crypto/virtio: not in enabled drivers build config 00:01:36.756 compress/isal: not in enabled drivers build config 00:01:36.756 compress/mlx5: not in enabled drivers build config 00:01:36.756 compress/nitrox: not in enabled drivers build config 00:01:36.756 compress/octeontx: not in enabled drivers build config 00:01:36.756 compress/zlib: not in enabled drivers build config 00:01:36.756 regex/*: missing internal dependency, "regexdev" 00:01:36.756 ml/*: missing internal dependency, "mldev" 
00:01:36.756 vdpa/ifc: not in enabled drivers build config 00:01:36.756 vdpa/mlx5: not in enabled drivers build config 00:01:36.756 vdpa/nfp: not in enabled drivers build config 00:01:36.756 vdpa/sfc: not in enabled drivers build config 00:01:36.756 event/*: missing internal dependency, "eventdev" 00:01:36.756 baseband/*: missing internal dependency, "bbdev" 00:01:36.756 gpu/*: missing internal dependency, "gpudev" 00:01:36.756 00:01:36.756 00:01:36.756 Build targets in project: 85 00:01:36.756 00:01:36.756 DPDK 24.03.0 00:01:36.756 00:01:36.756 User defined options 00:01:36.756 buildtype : debug 00:01:36.756 default_library : shared 00:01:36.756 libdir : lib 00:01:36.756 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:36.756 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:36.756 c_link_args : 00:01:36.756 cpu_instruction_set: native 00:01:36.756 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:36.756 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:36.756 enable_docs : false 00:01:36.756 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:36.756 enable_kmods : false 00:01:36.756 max_lcores : 128 00:01:36.756 tests : false 00:01:36.756 00:01:36.756 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.016 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:37.277 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.277 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.277 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.277 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.277 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.277 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.277 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.277 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.277 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.277 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.277 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.277 [12/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.277 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.277 [14/268] Linking static target lib/librte_kvargs.a 00:01:37.277 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.277 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:37.536 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.536 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.536 [19/268] Linking static target lib/librte_log.a 00:01:37.536 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:37.536 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:37.536 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:37.536 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:37.536 [24/268] Linking static target lib/librte_pci.a 00:01:37.536 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:37.794 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:37.795 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.795 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:37.795 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.795 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:37.795 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:37.795 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:37.795 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:37.795 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:37.795 [35/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:37.795 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:37.795 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:37.795 [38/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:37.795 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:37.795 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:37.795 [41/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:37.795 [42/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:37.795 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:37.795 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:37.795 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:37.795 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:37.795 [47/268] Compiling C object 
lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:37.795 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:37.795 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:37.795 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.795 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.795 [52/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:37.795 [53/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:37.795 [54/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:37.795 [55/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:37.795 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:37.795 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:37.795 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:37.795 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:37.795 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:37.795 [61/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:37.795 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:37.795 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:37.795 [64/268] Linking static target lib/librte_ring.a 00:01:37.795 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:37.795 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:37.795 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:37.795 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.795 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:37.795 [70/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.795 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:37.795 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:37.795 [73/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.795 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:37.795 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.795 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:37.795 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:37.795 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:37.795 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:37.795 [80/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:37.795 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:37.795 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:37.795 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:37.795 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:37.795 [85/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:37.795 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:37.795 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:37.795 [88/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:37.795 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.054 [90/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.054 [91/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.054 [92/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.054 [93/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.054 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.054 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:38.054 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.054 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.054 [98/268] Linking static target lib/librte_telemetry.a 00:01:38.054 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.054 [100/268] Linking static target lib/librte_meter.a 00:01:38.054 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.054 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.054 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:38.054 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:38.054 [105/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.054 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.054 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:38.054 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.054 [109/268] Linking static target lib/librte_mempool.a 00:01:38.054 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:38.054 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.054 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.054 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.054 [114/268] Compiling C object 
lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.054 [115/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.054 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.054 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:38.054 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:38.054 [119/268] Linking static target lib/librte_net.a 00:01:38.054 [120/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.054 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:38.054 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:38.054 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:38.054 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.054 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:38.054 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:38.054 [127/268] Linking static target lib/librte_eal.a 00:01:38.054 [128/268] Linking static target lib/librte_rcu.a 00:01:38.054 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:38.054 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.054 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:38.054 [132/268] Linking static target lib/librte_cmdline.a 00:01:38.054 [133/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:38.054 [134/268] Linking static target lib/librte_mbuf.a 00:01:38.054 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.054 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:38.054 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.054 [138/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:38.054 [139/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:38.054 [140/268] Linking target lib/librte_log.so.24.1 00:01:38.054 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:38.054 [142/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.313 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:38.313 [144/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.313 [145/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:38.313 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:38.313 [147/268] Linking static target lib/librte_timer.a 00:01:38.313 [148/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:38.313 [149/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.313 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.313 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:38.313 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:38.313 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.313 [154/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:38.313 [155/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:38.313 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:38.313 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:38.313 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:38.313 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.313 
[160/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.313 [161/268] Linking target lib/librte_kvargs.so.24.1 00:01:38.313 [162/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.313 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.313 [164/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.313 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:38.313 [166/268] Linking static target lib/librte_compressdev.a 00:01:38.313 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:38.313 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:38.313 [169/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:38.313 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:38.313 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:38.313 [172/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:38.313 [173/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:38.313 [174/268] Linking static target lib/librte_dmadev.a 00:01:38.313 [175/268] Linking target lib/librte_telemetry.so.24.1 00:01:38.313 [176/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.313 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:38.313 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.313 [179/268] Linking static target lib/librte_security.a 00:01:38.313 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:38.313 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:38.313 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:38.573 [183/268] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:38.573 [184/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:38.573 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:38.573 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:38.573 [187/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:38.573 [188/268] Linking static target lib/librte_power.a 00:01:38.573 [189/268] Linking static target lib/librte_reorder.a 00:01:38.573 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:38.573 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:38.573 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:38.573 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:38.573 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:38.573 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:38.573 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.573 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.573 [198/268] Linking static target drivers/librte_bus_vdev.a 00:01:38.573 [199/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.573 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.573 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:38.573 [202/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.573 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.573 [204/268] Linking static target lib/librte_hash.a 00:01:38.573 [205/268] 
Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.573 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:38.573 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:38.833 [208/268] Linking static target drivers/librte_mempool_ring.a 00:01:38.833 [209/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.833 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.833 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.833 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:38.833 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:38.833 [214/268] Linking static target lib/librte_cryptodev.a 00:01:38.833 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.833 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.093 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.093 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.093 [219/268] Linking static target lib/librte_ethdev.a 00:01:39.093 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.093 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.093 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.093 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.093 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:39.352 [225/268] Generating lib/power.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:39.611 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.612 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.181 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:40.181 [229/268] Linking static target lib/librte_vhost.a 00:01:40.750 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.128 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.559 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.126 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.126 [234/268] Linking target lib/librte_eal.so.24.1 00:01:48.126 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:48.387 [236/268] Linking target lib/librte_ring.so.24.1 00:01:48.387 [237/268] Linking target lib/librte_timer.so.24.1 00:01:48.387 [238/268] Linking target lib/librte_pci.so.24.1 00:01:48.387 [239/268] Linking target lib/librte_meter.so.24.1 00:01:48.387 [240/268] Linking target lib/librte_dmadev.so.24.1 00:01:48.387 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:48.387 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:48.387 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:48.387 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:48.387 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:48.387 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:48.387 [247/268] Linking target drivers/librte_bus_pci.so.24.1 
00:01:48.387 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:48.387 [249/268] Linking target lib/librte_rcu.so.24.1 00:01:48.646 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:48.647 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:48.647 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:48.647 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:48.647 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:48.906 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:48.906 [256/268] Linking target lib/librte_net.so.24.1 00:01:48.906 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:01:48.906 [258/268] Linking target lib/librte_compressdev.so.24.1 00:01:48.906 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:48.906 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:48.906 [261/268] Linking target lib/librte_security.so.24.1 00:01:48.906 [262/268] Linking target lib/librte_hash.so.24.1 00:01:48.906 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:48.906 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:49.165 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:49.165 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:49.165 [267/268] Linking target lib/librte_power.so.24.1 00:01:49.165 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:49.165 INFO: autodetecting backend as ninja 00:01:49.165 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:01.380 CC lib/log/log.o 00:02:01.380 CC lib/log/log_flags.o 00:02:01.380 CC lib/log/log_deprecated.o 00:02:01.380 CC lib/ut/ut.o 00:02:01.380 CC lib/ut_mock/mock.o 
00:02:01.380 LIB libspdk_ut_mock.a 00:02:01.380 LIB libspdk_ut.a 00:02:01.380 LIB libspdk_log.a 00:02:01.380 SO libspdk_ut_mock.so.6.0 00:02:01.380 SO libspdk_ut.so.2.0 00:02:01.380 SO libspdk_log.so.7.1 00:02:01.380 SYMLINK libspdk_ut_mock.so 00:02:01.380 SYMLINK libspdk_ut.so 00:02:01.380 SYMLINK libspdk_log.so 00:02:01.380 CC lib/ioat/ioat.o 00:02:01.380 CC lib/util/base64.o 00:02:01.380 CC lib/util/bit_array.o 00:02:01.380 CC lib/dma/dma.o 00:02:01.380 CC lib/util/cpuset.o 00:02:01.380 CXX lib/trace_parser/trace.o 00:02:01.380 CC lib/util/crc16.o 00:02:01.380 CC lib/util/crc32.o 00:02:01.380 CC lib/util/crc32c.o 00:02:01.380 CC lib/util/crc32_ieee.o 00:02:01.380 CC lib/util/crc64.o 00:02:01.380 CC lib/util/dif.o 00:02:01.380 CC lib/util/fd.o 00:02:01.380 CC lib/util/fd_group.o 00:02:01.380 CC lib/util/file.o 00:02:01.380 CC lib/util/hexlify.o 00:02:01.380 CC lib/util/iov.o 00:02:01.380 CC lib/util/math.o 00:02:01.380 CC lib/util/net.o 00:02:01.380 CC lib/util/pipe.o 00:02:01.380 CC lib/util/strerror_tls.o 00:02:01.380 CC lib/util/string.o 00:02:01.380 CC lib/util/uuid.o 00:02:01.380 CC lib/util/xor.o 00:02:01.380 CC lib/util/zipf.o 00:02:01.380 CC lib/util/md5.o 00:02:01.380 CC lib/vfio_user/host/vfio_user_pci.o 00:02:01.380 CC lib/vfio_user/host/vfio_user.o 00:02:01.380 LIB libspdk_dma.a 00:02:01.380 SO libspdk_dma.so.5.0 00:02:01.380 LIB libspdk_ioat.a 00:02:01.380 SO libspdk_ioat.so.7.0 00:02:01.380 SYMLINK libspdk_dma.so 00:02:01.380 SYMLINK libspdk_ioat.so 00:02:01.380 LIB libspdk_vfio_user.a 00:02:01.380 SO libspdk_vfio_user.so.5.0 00:02:01.380 LIB libspdk_util.a 00:02:01.380 SYMLINK libspdk_vfio_user.so 00:02:01.380 SO libspdk_util.so.10.1 00:02:01.380 SYMLINK libspdk_util.so 00:02:01.380 LIB libspdk_trace_parser.a 00:02:01.380 SO libspdk_trace_parser.so.6.0 00:02:01.380 SYMLINK libspdk_trace_parser.so 00:02:01.380 CC lib/env_dpdk/env.o 00:02:01.380 CC lib/env_dpdk/memory.o 00:02:01.380 CC lib/env_dpdk/pci.o 00:02:01.380 CC lib/vmd/vmd.o 00:02:01.380 CC 
lib/env_dpdk/init.o 00:02:01.380 CC lib/vmd/led.o 00:02:01.380 CC lib/idxd/idxd.o 00:02:01.380 CC lib/json/json_parse.o 00:02:01.380 CC lib/env_dpdk/threads.o 00:02:01.380 CC lib/json/json_util.o 00:02:01.380 CC lib/idxd/idxd_user.o 00:02:01.380 CC lib/rdma_utils/rdma_utils.o 00:02:01.380 CC lib/env_dpdk/pci_ioat.o 00:02:01.380 CC lib/json/json_write.o 00:02:01.380 CC lib/env_dpdk/pci_virtio.o 00:02:01.380 CC lib/idxd/idxd_kernel.o 00:02:01.380 CC lib/env_dpdk/pci_vmd.o 00:02:01.380 CC lib/env_dpdk/pci_idxd.o 00:02:01.380 CC lib/conf/conf.o 00:02:01.380 CC lib/env_dpdk/pci_event.o 00:02:01.380 CC lib/env_dpdk/sigbus_handler.o 00:02:01.380 CC lib/env_dpdk/pci_dpdk.o 00:02:01.380 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:01.380 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:01.380 LIB libspdk_conf.a 00:02:01.380 SO libspdk_conf.so.6.0 00:02:01.380 LIB libspdk_rdma_utils.a 00:02:01.380 SO libspdk_rdma_utils.so.1.0 00:02:01.380 LIB libspdk_json.a 00:02:01.380 SYMLINK libspdk_conf.so 00:02:01.380 SO libspdk_json.so.6.0 00:02:01.380 SYMLINK libspdk_rdma_utils.so 00:02:01.380 SYMLINK libspdk_json.so 00:02:01.380 LIB libspdk_idxd.a 00:02:01.380 SO libspdk_idxd.so.12.1 00:02:01.380 LIB libspdk_vmd.a 00:02:01.380 SO libspdk_vmd.so.6.0 00:02:01.380 SYMLINK libspdk_idxd.so 00:02:01.638 SYMLINK libspdk_vmd.so 00:02:01.638 CC lib/rdma_provider/common.o 00:02:01.638 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:01.638 CC lib/jsonrpc/jsonrpc_server.o 00:02:01.638 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:01.638 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:01.638 CC lib/jsonrpc/jsonrpc_client.o 00:02:01.897 LIB libspdk_rdma_provider.a 00:02:01.897 SO libspdk_rdma_provider.so.7.0 00:02:01.897 LIB libspdk_jsonrpc.a 00:02:01.897 SO libspdk_jsonrpc.so.6.0 00:02:01.897 SYMLINK libspdk_rdma_provider.so 00:02:01.897 SYMLINK libspdk_jsonrpc.so 00:02:01.897 LIB libspdk_env_dpdk.a 00:02:01.897 SO libspdk_env_dpdk.so.15.1 00:02:02.156 SYMLINK libspdk_env_dpdk.so 00:02:02.156 CC lib/rpc/rpc.o 00:02:02.415 
LIB libspdk_rpc.a 00:02:02.415 SO libspdk_rpc.so.6.0 00:02:02.415 SYMLINK libspdk_rpc.so 00:02:02.674 CC lib/keyring/keyring.o 00:02:02.674 CC lib/keyring/keyring_rpc.o 00:02:02.674 CC lib/trace/trace.o 00:02:02.674 CC lib/trace/trace_flags.o 00:02:02.674 CC lib/trace/trace_rpc.o 00:02:02.674 CC lib/notify/notify.o 00:02:02.674 CC lib/notify/notify_rpc.o 00:02:02.933 LIB libspdk_notify.a 00:02:02.933 SO libspdk_notify.so.6.0 00:02:02.933 LIB libspdk_keyring.a 00:02:02.933 LIB libspdk_trace.a 00:02:02.933 SO libspdk_keyring.so.2.0 00:02:02.933 SO libspdk_trace.so.11.0 00:02:02.933 SYMLINK libspdk_notify.so 00:02:03.193 SYMLINK libspdk_keyring.so 00:02:03.193 SYMLINK libspdk_trace.so 00:02:03.452 CC lib/sock/sock.o 00:02:03.452 CC lib/sock/sock_rpc.o 00:02:03.452 CC lib/thread/thread.o 00:02:03.452 CC lib/thread/iobuf.o 00:02:03.711 LIB libspdk_sock.a 00:02:03.711 SO libspdk_sock.so.10.0 00:02:03.711 SYMLINK libspdk_sock.so 00:02:04.279 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:04.279 CC lib/nvme/nvme_ctrlr.o 00:02:04.279 CC lib/nvme/nvme_fabric.o 00:02:04.279 CC lib/nvme/nvme_ns_cmd.o 00:02:04.279 CC lib/nvme/nvme_ns.o 00:02:04.279 CC lib/nvme/nvme_pcie_common.o 00:02:04.279 CC lib/nvme/nvme_pcie.o 00:02:04.279 CC lib/nvme/nvme_qpair.o 00:02:04.279 CC lib/nvme/nvme.o 00:02:04.279 CC lib/nvme/nvme_quirks.o 00:02:04.279 CC lib/nvme/nvme_transport.o 00:02:04.279 CC lib/nvme/nvme_discovery.o 00:02:04.279 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:04.279 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:04.279 CC lib/nvme/nvme_tcp.o 00:02:04.279 CC lib/nvme/nvme_opal.o 00:02:04.279 CC lib/nvme/nvme_io_msg.o 00:02:04.279 CC lib/nvme/nvme_poll_group.o 00:02:04.279 CC lib/nvme/nvme_zns.o 00:02:04.279 CC lib/nvme/nvme_stubs.o 00:02:04.280 CC lib/nvme/nvme_auth.o 00:02:04.280 CC lib/nvme/nvme_cuse.o 00:02:04.280 CC lib/nvme/nvme_vfio_user.o 00:02:04.280 CC lib/nvme/nvme_rdma.o 00:02:04.537 LIB libspdk_thread.a 00:02:04.537 SO libspdk_thread.so.11.0 00:02:04.537 SYMLINK libspdk_thread.so 
00:02:04.797 CC lib/blob/blobstore.o 00:02:04.797 CC lib/blob/request.o 00:02:04.797 CC lib/blob/zeroes.o 00:02:04.797 CC lib/blob/blob_bs_dev.o 00:02:04.797 CC lib/init/json_config.o 00:02:04.797 CC lib/init/subsystem.o 00:02:04.797 CC lib/init/subsystem_rpc.o 00:02:04.797 CC lib/init/rpc.o 00:02:04.797 CC lib/fsdev/fsdev.o 00:02:04.797 CC lib/fsdev/fsdev_rpc.o 00:02:04.797 CC lib/fsdev/fsdev_io.o 00:02:04.797 CC lib/vfu_tgt/tgt_endpoint.o 00:02:04.797 CC lib/vfu_tgt/tgt_rpc.o 00:02:04.797 CC lib/accel/accel.o 00:02:04.797 CC lib/accel/accel_rpc.o 00:02:05.055 CC lib/accel/accel_sw.o 00:02:05.055 CC lib/virtio/virtio.o 00:02:05.055 CC lib/virtio/virtio_vhost_user.o 00:02:05.055 CC lib/virtio/virtio_vfio_user.o 00:02:05.055 CC lib/virtio/virtio_pci.o 00:02:05.055 LIB libspdk_init.a 00:02:05.055 SO libspdk_init.so.6.0 00:02:05.314 LIB libspdk_virtio.a 00:02:05.314 LIB libspdk_vfu_tgt.a 00:02:05.314 SYMLINK libspdk_init.so 00:02:05.314 SO libspdk_virtio.so.7.0 00:02:05.314 SO libspdk_vfu_tgt.so.3.0 00:02:05.314 SYMLINK libspdk_virtio.so 00:02:05.314 SYMLINK libspdk_vfu_tgt.so 00:02:05.314 LIB libspdk_fsdev.a 00:02:05.613 SO libspdk_fsdev.so.2.0 00:02:05.613 CC lib/event/app.o 00:02:05.613 CC lib/event/reactor.o 00:02:05.613 CC lib/event/log_rpc.o 00:02:05.613 CC lib/event/app_rpc.o 00:02:05.613 SYMLINK libspdk_fsdev.so 00:02:05.613 CC lib/event/scheduler_static.o 00:02:05.871 LIB libspdk_accel.a 00:02:05.871 SO libspdk_accel.so.16.0 00:02:05.871 LIB libspdk_nvme.a 00:02:05.871 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:05.871 SYMLINK libspdk_accel.so 00:02:05.871 LIB libspdk_event.a 00:02:05.871 SO libspdk_nvme.so.15.0 00:02:05.871 SO libspdk_event.so.14.0 00:02:05.871 SYMLINK libspdk_event.so 00:02:06.131 SYMLINK libspdk_nvme.so 00:02:06.131 CC lib/bdev/bdev.o 00:02:06.131 CC lib/bdev/bdev_rpc.o 00:02:06.131 CC lib/bdev/bdev_zone.o 00:02:06.131 CC lib/bdev/part.o 00:02:06.131 CC lib/bdev/scsi_nvme.o 00:02:06.390 LIB libspdk_fuse_dispatcher.a 00:02:06.390 SO 
libspdk_fuse_dispatcher.so.1.0 00:02:06.390 SYMLINK libspdk_fuse_dispatcher.so 00:02:06.956 LIB libspdk_blob.a 00:02:07.214 SO libspdk_blob.so.11.0 00:02:07.214 SYMLINK libspdk_blob.so 00:02:07.472 CC lib/blobfs/blobfs.o 00:02:07.472 CC lib/lvol/lvol.o 00:02:07.472 CC lib/blobfs/tree.o 00:02:08.038 LIB libspdk_bdev.a 00:02:08.038 SO libspdk_bdev.so.17.0 00:02:08.038 LIB libspdk_blobfs.a 00:02:08.038 SO libspdk_blobfs.so.10.0 00:02:08.038 LIB libspdk_lvol.a 00:02:08.038 SYMLINK libspdk_bdev.so 00:02:08.038 SO libspdk_lvol.so.10.0 00:02:08.038 SYMLINK libspdk_blobfs.so 00:02:08.297 SYMLINK libspdk_lvol.so 00:02:08.554 CC lib/scsi/dev.o 00:02:08.554 CC lib/nbd/nbd.o 00:02:08.554 CC lib/nvmf/ctrlr.o 00:02:08.554 CC lib/ublk/ublk.o 00:02:08.554 CC lib/scsi/lun.o 00:02:08.554 CC lib/nbd/nbd_rpc.o 00:02:08.554 CC lib/ftl/ftl_core.o 00:02:08.554 CC lib/nvmf/ctrlr_discovery.o 00:02:08.554 CC lib/scsi/port.o 00:02:08.554 CC lib/ublk/ublk_rpc.o 00:02:08.554 CC lib/nvmf/ctrlr_bdev.o 00:02:08.554 CC lib/ftl/ftl_init.o 00:02:08.554 CC lib/scsi/scsi.o 00:02:08.554 CC lib/ftl/ftl_layout.o 00:02:08.554 CC lib/nvmf/subsystem.o 00:02:08.554 CC lib/ftl/ftl_debug.o 00:02:08.555 CC lib/scsi/scsi_bdev.o 00:02:08.555 CC lib/nvmf/nvmf.o 00:02:08.555 CC lib/ftl/ftl_io.o 00:02:08.555 CC lib/scsi/scsi_pr.o 00:02:08.555 CC lib/scsi/scsi_rpc.o 00:02:08.555 CC lib/nvmf/nvmf_rpc.o 00:02:08.555 CC lib/ftl/ftl_sb.o 00:02:08.555 CC lib/ftl/ftl_l2p.o 00:02:08.555 CC lib/nvmf/transport.o 00:02:08.555 CC lib/scsi/task.o 00:02:08.555 CC lib/ftl/ftl_l2p_flat.o 00:02:08.555 CC lib/nvmf/tcp.o 00:02:08.555 CC lib/ftl/ftl_nv_cache.o 00:02:08.555 CC lib/nvmf/stubs.o 00:02:08.555 CC lib/ftl/ftl_band.o 00:02:08.555 CC lib/nvmf/mdns_server.o 00:02:08.555 CC lib/ftl/ftl_band_ops.o 00:02:08.555 CC lib/ftl/ftl_writer.o 00:02:08.555 CC lib/nvmf/vfio_user.o 00:02:08.555 CC lib/nvmf/rdma.o 00:02:08.555 CC lib/ftl/ftl_rq.o 00:02:08.555 CC lib/nvmf/auth.o 00:02:08.555 CC lib/ftl/ftl_reloc.o 00:02:08.555 CC 
lib/ftl/ftl_l2p_cache.o 00:02:08.555 CC lib/ftl/ftl_p2l_log.o 00:02:08.555 CC lib/ftl/ftl_p2l.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:08.555 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:08.555 CC lib/ftl/utils/ftl_conf.o 00:02:08.555 CC lib/ftl/utils/ftl_md.o 00:02:08.555 CC lib/ftl/utils/ftl_bitmap.o 00:02:08.555 CC lib/ftl/utils/ftl_mempool.o 00:02:08.555 CC lib/ftl/utils/ftl_property.o 00:02:08.555 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:08.555 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:08.555 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:08.555 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:08.555 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:08.555 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:08.555 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.555 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:08.555 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.555 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.555 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:08.555 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.555 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:08.555 CC lib/ftl/base/ftl_base_dev.o 00:02:08.555 CC lib/ftl/base/ftl_base_bdev.o 00:02:08.555 CC lib/ftl/ftl_trace.o 00:02:09.120 LIB libspdk_scsi.a 00:02:09.120 LIB libspdk_nbd.a 00:02:09.120 SO libspdk_nbd.so.7.0 00:02:09.120 SO libspdk_scsi.so.9.0 00:02:09.120 SYMLINK libspdk_nbd.so 00:02:09.120 SYMLINK libspdk_scsi.so 00:02:09.120 LIB libspdk_ublk.a 00:02:09.120 SO libspdk_ublk.so.3.0 00:02:09.378 SYMLINK libspdk_ublk.so 00:02:09.378 CC 
lib/iscsi/conn.o 00:02:09.378 CC lib/iscsi/init_grp.o 00:02:09.379 CC lib/iscsi/iscsi.o 00:02:09.379 CC lib/iscsi/param.o 00:02:09.379 CC lib/iscsi/portal_grp.o 00:02:09.379 CC lib/iscsi/iscsi_rpc.o 00:02:09.379 CC lib/iscsi/tgt_node.o 00:02:09.379 CC lib/iscsi/iscsi_subsystem.o 00:02:09.379 CC lib/iscsi/task.o 00:02:09.379 CC lib/vhost/vhost.o 00:02:09.379 CC lib/vhost/vhost_scsi.o 00:02:09.379 CC lib/vhost/vhost_rpc.o 00:02:09.379 CC lib/vhost/vhost_blk.o 00:02:09.379 CC lib/vhost/rte_vhost_user.o 00:02:09.379 LIB libspdk_ftl.a 00:02:09.637 SO libspdk_ftl.so.9.0 00:02:09.896 SYMLINK libspdk_ftl.so 00:02:10.154 LIB libspdk_vhost.a 00:02:10.154 LIB libspdk_nvmf.a 00:02:10.413 SO libspdk_vhost.so.8.0 00:02:10.413 SO libspdk_nvmf.so.20.0 00:02:10.413 SYMLINK libspdk_vhost.so 00:02:10.413 LIB libspdk_iscsi.a 00:02:10.413 SYMLINK libspdk_nvmf.so 00:02:10.413 SO libspdk_iscsi.so.8.0 00:02:10.672 SYMLINK libspdk_iscsi.so 00:02:11.241 CC module/env_dpdk/env_dpdk_rpc.o 00:02:11.241 CC module/vfu_device/vfu_virtio.o 00:02:11.241 CC module/vfu_device/vfu_virtio_blk.o 00:02:11.241 CC module/vfu_device/vfu_virtio_scsi.o 00:02:11.241 CC module/vfu_device/vfu_virtio_rpc.o 00:02:11.241 CC module/vfu_device/vfu_virtio_fs.o 00:02:11.241 CC module/sock/posix/posix.o 00:02:11.241 CC module/accel/iaa/accel_iaa.o 00:02:11.241 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:11.241 CC module/accel/iaa/accel_iaa_rpc.o 00:02:11.241 CC module/keyring/file/keyring.o 00:02:11.241 LIB libspdk_env_dpdk_rpc.a 00:02:11.241 CC module/keyring/file/keyring_rpc.o 00:02:11.241 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:11.241 CC module/accel/dsa/accel_dsa.o 00:02:11.241 CC module/accel/dsa/accel_dsa_rpc.o 00:02:11.241 CC module/scheduler/gscheduler/gscheduler.o 00:02:11.241 CC module/accel/ioat/accel_ioat.o 00:02:11.241 CC module/fsdev/aio/fsdev_aio.o 00:02:11.241 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:11.241 CC module/accel/ioat/accel_ioat_rpc.o 00:02:11.241 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:11.241 CC module/keyring/linux/keyring.o 00:02:11.241 CC module/keyring/linux/keyring_rpc.o 00:02:11.241 CC module/accel/error/accel_error.o 00:02:11.241 CC module/accel/error/accel_error_rpc.o 00:02:11.241 CC module/blob/bdev/blob_bdev.o 00:02:11.241 SO libspdk_env_dpdk_rpc.so.6.0 00:02:11.499 SYMLINK libspdk_env_dpdk_rpc.so 00:02:11.499 LIB libspdk_keyring_file.a 00:02:11.499 LIB libspdk_scheduler_gscheduler.a 00:02:11.499 LIB libspdk_scheduler_dpdk_governor.a 00:02:11.499 SO libspdk_keyring_file.so.2.0 00:02:11.499 LIB libspdk_keyring_linux.a 00:02:11.499 SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.499 LIB libspdk_accel_ioat.a 00:02:11.499 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:11.499 LIB libspdk_accel_iaa.a 00:02:11.499 LIB libspdk_scheduler_dynamic.a 00:02:11.499 SO libspdk_keyring_linux.so.1.0 00:02:11.499 LIB libspdk_accel_error.a 00:02:11.499 SYMLINK libspdk_keyring_file.so 00:02:11.499 SO libspdk_accel_ioat.so.6.0 00:02:11.499 SYMLINK libspdk_scheduler_gscheduler.so 00:02:11.499 SO libspdk_scheduler_dynamic.so.4.0 00:02:11.499 SO libspdk_accel_iaa.so.3.0 00:02:11.499 SO libspdk_accel_error.so.2.0 00:02:11.499 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.499 SYMLINK libspdk_keyring_linux.so 00:02:11.499 LIB libspdk_accel_dsa.a 00:02:11.499 LIB libspdk_blob_bdev.a 00:02:11.499 SYMLINK libspdk_accel_ioat.so 00:02:11.499 SO libspdk_accel_dsa.so.5.0 00:02:11.499 SYMLINK libspdk_scheduler_dynamic.so 00:02:11.499 SYMLINK libspdk_accel_iaa.so 00:02:11.499 SYMLINK libspdk_accel_error.so 00:02:11.499 SO libspdk_blob_bdev.so.11.0 00:02:11.758 SYMLINK libspdk_accel_dsa.so 00:02:11.758 LIB libspdk_vfu_device.a 00:02:11.758 SYMLINK libspdk_blob_bdev.so 00:02:11.758 SO libspdk_vfu_device.so.3.0 00:02:11.758 SYMLINK libspdk_vfu_device.so 00:02:11.758 LIB libspdk_fsdev_aio.a 00:02:11.758 LIB libspdk_sock_posix.a 00:02:11.758 SO libspdk_fsdev_aio.so.1.0 00:02:12.016 SO libspdk_sock_posix.so.6.0 00:02:12.016 SYMLINK 
libspdk_fsdev_aio.so 00:02:12.016 SYMLINK libspdk_sock_posix.so 00:02:12.016 CC module/bdev/error/vbdev_error.o 00:02:12.016 CC module/bdev/error/vbdev_error_rpc.o 00:02:12.016 CC module/bdev/gpt/gpt.o 00:02:12.016 CC module/bdev/gpt/vbdev_gpt.o 00:02:12.016 CC module/bdev/raid/bdev_raid.o 00:02:12.016 CC module/bdev/delay/vbdev_delay.o 00:02:12.016 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:12.016 CC module/bdev/raid/bdev_raid_rpc.o 00:02:12.016 CC module/bdev/raid/bdev_raid_sb.o 00:02:12.016 CC module/bdev/raid/raid0.o 00:02:12.016 CC module/bdev/null/bdev_null.o 00:02:12.016 CC module/bdev/raid/raid1.o 00:02:12.016 CC module/bdev/null/bdev_null_rpc.o 00:02:12.016 CC module/bdev/raid/concat.o 00:02:12.016 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:12.016 CC module/bdev/nvme/bdev_nvme.o 00:02:12.016 CC module/bdev/iscsi/bdev_iscsi.o 00:02:12.016 CC module/bdev/lvol/vbdev_lvol.o 00:02:12.016 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:12.016 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:12.016 CC module/bdev/nvme/nvme_rpc.o 00:02:12.274 CC module/bdev/nvme/vbdev_opal.o 00:02:12.274 CC module/bdev/nvme/bdev_mdns_client.o 00:02:12.274 CC module/blobfs/bdev/blobfs_bdev.o 00:02:12.274 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:12.274 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:12.274 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:12.274 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:12.274 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:12.274 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:12.274 CC module/bdev/malloc/bdev_malloc.o 00:02:12.274 CC module/bdev/split/vbdev_split.o 00:02:12.274 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:12.274 CC module/bdev/split/vbdev_split_rpc.o 00:02:12.274 CC module/bdev/aio/bdev_aio.o 00:02:12.274 CC module/bdev/aio/bdev_aio_rpc.o 00:02:12.274 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:12.274 CC module/bdev/passthru/vbdev_passthru.o 00:02:12.274 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:12.274 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:12.274 CC module/bdev/ftl/bdev_ftl.o 00:02:12.274 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:12.274 LIB libspdk_blobfs_bdev.a 00:02:12.531 LIB libspdk_bdev_error.a 00:02:12.531 LIB libspdk_bdev_split.a 00:02:12.531 SO libspdk_blobfs_bdev.so.6.0 00:02:12.531 SO libspdk_bdev_error.so.6.0 00:02:12.531 LIB libspdk_bdev_gpt.a 00:02:12.531 SO libspdk_bdev_split.so.6.0 00:02:12.531 LIB libspdk_bdev_null.a 00:02:12.531 SO libspdk_bdev_gpt.so.6.0 00:02:12.531 SYMLINK libspdk_blobfs_bdev.so 00:02:12.531 SYMLINK libspdk_bdev_error.so 00:02:12.531 LIB libspdk_bdev_ftl.a 00:02:12.531 LIB libspdk_bdev_passthru.a 00:02:12.531 SO libspdk_bdev_null.so.6.0 00:02:12.531 LIB libspdk_bdev_iscsi.a 00:02:12.531 SO libspdk_bdev_ftl.so.6.0 00:02:12.531 SYMLINK libspdk_bdev_split.so 00:02:12.531 LIB libspdk_bdev_delay.a 00:02:12.531 SYMLINK libspdk_bdev_gpt.so 00:02:12.531 SO libspdk_bdev_passthru.so.6.0 00:02:12.531 SO libspdk_bdev_iscsi.so.6.0 00:02:12.531 LIB libspdk_bdev_zone_block.a 00:02:12.531 LIB libspdk_bdev_malloc.a 00:02:12.531 LIB libspdk_bdev_aio.a 00:02:12.532 SO libspdk_bdev_delay.so.6.0 00:02:12.532 SYMLINK libspdk_bdev_null.so 00:02:12.532 SO libspdk_bdev_zone_block.so.6.0 00:02:12.532 SO libspdk_bdev_malloc.so.6.0 00:02:12.532 SYMLINK libspdk_bdev_ftl.so 00:02:12.532 SO libspdk_bdev_aio.so.6.0 00:02:12.532 SYMLINK libspdk_bdev_iscsi.so 00:02:12.532 SYMLINK libspdk_bdev_passthru.so 00:02:12.532 SYMLINK libspdk_bdev_delay.so 00:02:12.532 SYMLINK libspdk_bdev_zone_block.so 00:02:12.532 LIB libspdk_bdev_lvol.a 00:02:12.532 LIB libspdk_bdev_virtio.a 00:02:12.532 SYMLINK libspdk_bdev_malloc.so 00:02:12.791 SYMLINK libspdk_bdev_aio.so 00:02:12.791 SO libspdk_bdev_lvol.so.6.0 00:02:12.791 SO libspdk_bdev_virtio.so.6.0 00:02:12.791 SYMLINK libspdk_bdev_lvol.so 00:02:12.791 SYMLINK libspdk_bdev_virtio.so 00:02:13.050 LIB libspdk_bdev_raid.a 00:02:13.050 SO libspdk_bdev_raid.so.6.0 00:02:13.050 SYMLINK libspdk_bdev_raid.so 00:02:13.988 
LIB libspdk_bdev_nvme.a 00:02:13.988 SO libspdk_bdev_nvme.so.7.1 00:02:14.247 SYMLINK libspdk_bdev_nvme.so 00:02:14.816 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:14.816 CC module/event/subsystems/iobuf/iobuf.o 00:02:14.816 CC module/event/subsystems/vmd/vmd.o 00:02:14.816 CC module/event/subsystems/sock/sock.o 00:02:14.816 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:14.816 CC module/event/subsystems/keyring/keyring.o 00:02:14.816 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:14.816 CC module/event/subsystems/scheduler/scheduler.o 00:02:14.816 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:14.816 CC module/event/subsystems/fsdev/fsdev.o 00:02:15.075 LIB libspdk_event_vfu_tgt.a 00:02:15.075 LIB libspdk_event_keyring.a 00:02:15.075 LIB libspdk_event_sock.a 00:02:15.075 LIB libspdk_event_vmd.a 00:02:15.075 LIB libspdk_event_vhost_blk.a 00:02:15.075 LIB libspdk_event_iobuf.a 00:02:15.075 LIB libspdk_event_fsdev.a 00:02:15.075 LIB libspdk_event_scheduler.a 00:02:15.075 SO libspdk_event_vfu_tgt.so.3.0 00:02:15.075 SO libspdk_event_keyring.so.1.0 00:02:15.075 SO libspdk_event_sock.so.5.0 00:02:15.075 SO libspdk_event_iobuf.so.3.0 00:02:15.075 SO libspdk_event_vmd.so.6.0 00:02:15.075 SO libspdk_event_fsdev.so.1.0 00:02:15.075 SO libspdk_event_vhost_blk.so.3.0 00:02:15.075 SO libspdk_event_scheduler.so.4.0 00:02:15.075 SYMLINK libspdk_event_vfu_tgt.so 00:02:15.075 SYMLINK libspdk_event_keyring.so 00:02:15.075 SYMLINK libspdk_event_fsdev.so 00:02:15.075 SYMLINK libspdk_event_scheduler.so 00:02:15.075 SYMLINK libspdk_event_vmd.so 00:02:15.075 SYMLINK libspdk_event_sock.so 00:02:15.075 SYMLINK libspdk_event_vhost_blk.so 00:02:15.075 SYMLINK libspdk_event_iobuf.so 00:02:15.335 CC module/event/subsystems/accel/accel.o 00:02:15.595 LIB libspdk_event_accel.a 00:02:15.595 SO libspdk_event_accel.so.6.0 00:02:15.595 SYMLINK libspdk_event_accel.so 00:02:15.854 CC module/event/subsystems/bdev/bdev.o 00:02:16.112 LIB libspdk_event_bdev.a 00:02:16.112 SO 
libspdk_event_bdev.so.6.0 00:02:16.112 SYMLINK libspdk_event_bdev.so 00:02:16.371 CC module/event/subsystems/scsi/scsi.o 00:02:16.630 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:16.630 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:16.630 CC module/event/subsystems/ublk/ublk.o 00:02:16.630 CC module/event/subsystems/nbd/nbd.o 00:02:16.630 LIB libspdk_event_scsi.a 00:02:16.630 LIB libspdk_event_nbd.a 00:02:16.630 LIB libspdk_event_ublk.a 00:02:16.630 SO libspdk_event_scsi.so.6.0 00:02:16.630 SO libspdk_event_nbd.so.6.0 00:02:16.630 SO libspdk_event_ublk.so.3.0 00:02:16.630 LIB libspdk_event_nvmf.a 00:02:16.630 SYMLINK libspdk_event_scsi.so 00:02:16.630 SYMLINK libspdk_event_nbd.so 00:02:16.630 SO libspdk_event_nvmf.so.6.0 00:02:16.630 SYMLINK libspdk_event_ublk.so 00:02:16.888 SYMLINK libspdk_event_nvmf.so 00:02:17.147 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:17.147 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.147 LIB libspdk_event_vhost_scsi.a 00:02:17.147 LIB libspdk_event_iscsi.a 00:02:17.147 SO libspdk_event_vhost_scsi.so.3.0 00:02:17.147 SO libspdk_event_iscsi.so.6.0 00:02:17.407 SYMLINK libspdk_event_vhost_scsi.so 00:02:17.407 SYMLINK libspdk_event_iscsi.so 00:02:17.407 SO libspdk.so.6.0 00:02:17.407 SYMLINK libspdk.so 00:02:17.981 CXX app/trace/trace.o 00:02:17.981 CC app/spdk_lspci/spdk_lspci.o 00:02:17.981 CC test/rpc_client/rpc_client_test.o 00:02:17.981 CC app/trace_record/trace_record.o 00:02:17.981 CC app/spdk_nvme_identify/identify.o 00:02:17.981 CC app/spdk_nvme_perf/perf.o 00:02:17.981 CC app/spdk_top/spdk_top.o 00:02:17.981 TEST_HEADER include/spdk/accel.h 00:02:17.981 TEST_HEADER include/spdk/accel_module.h 00:02:17.981 TEST_HEADER include/spdk/assert.h 00:02:17.981 TEST_HEADER include/spdk/bdev.h 00:02:17.981 TEST_HEADER include/spdk/barrier.h 00:02:17.981 TEST_HEADER include/spdk/base64.h 00:02:17.981 TEST_HEADER include/spdk/bdev_module.h 00:02:17.981 TEST_HEADER include/spdk/bdev_zone.h 00:02:17.981 TEST_HEADER 
include/spdk/bit_array.h 00:02:17.981 TEST_HEADER include/spdk/blob_bdev.h 00:02:17.981 TEST_HEADER include/spdk/bit_pool.h 00:02:17.981 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:17.981 TEST_HEADER include/spdk/blob.h 00:02:17.981 TEST_HEADER include/spdk/blobfs.h 00:02:17.981 TEST_HEADER include/spdk/conf.h 00:02:17.981 TEST_HEADER include/spdk/config.h 00:02:17.981 TEST_HEADER include/spdk/cpuset.h 00:02:17.981 CC app/spdk_nvme_discover/discovery_aer.o 00:02:17.981 TEST_HEADER include/spdk/crc16.h 00:02:17.981 TEST_HEADER include/spdk/crc32.h 00:02:17.981 TEST_HEADER include/spdk/crc64.h 00:02:17.981 TEST_HEADER include/spdk/dif.h 00:02:17.981 TEST_HEADER include/spdk/dma.h 00:02:17.981 TEST_HEADER include/spdk/endian.h 00:02:17.981 TEST_HEADER include/spdk/env_dpdk.h 00:02:17.981 TEST_HEADER include/spdk/env.h 00:02:17.981 TEST_HEADER include/spdk/event.h 00:02:17.981 TEST_HEADER include/spdk/fd.h 00:02:17.981 TEST_HEADER include/spdk/fd_group.h 00:02:17.981 TEST_HEADER include/spdk/file.h 00:02:17.981 TEST_HEADER include/spdk/fsdev.h 00:02:17.981 TEST_HEADER include/spdk/fsdev_module.h 00:02:17.981 TEST_HEADER include/spdk/ftl.h 00:02:17.981 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:17.981 TEST_HEADER include/spdk/gpt_spec.h 00:02:17.981 TEST_HEADER include/spdk/histogram_data.h 00:02:17.981 TEST_HEADER include/spdk/hexlify.h 00:02:17.981 TEST_HEADER include/spdk/idxd.h 00:02:17.981 TEST_HEADER include/spdk/idxd_spec.h 00:02:17.981 TEST_HEADER include/spdk/init.h 00:02:17.981 TEST_HEADER include/spdk/ioat_spec.h 00:02:17.981 TEST_HEADER include/spdk/ioat.h 00:02:17.981 TEST_HEADER include/spdk/iscsi_spec.h 00:02:17.981 TEST_HEADER include/spdk/json.h 00:02:17.981 CC app/spdk_dd/spdk_dd.o 00:02:17.981 TEST_HEADER include/spdk/keyring.h 00:02:17.981 TEST_HEADER include/spdk/jsonrpc.h 00:02:17.981 TEST_HEADER include/spdk/keyring_module.h 00:02:17.981 TEST_HEADER include/spdk/log.h 00:02:17.981 TEST_HEADER include/spdk/lvol.h 00:02:17.981 TEST_HEADER 
include/spdk/likely.h 00:02:17.981 TEST_HEADER include/spdk/md5.h 00:02:17.981 TEST_HEADER include/spdk/memory.h 00:02:17.981 TEST_HEADER include/spdk/mmio.h 00:02:17.981 TEST_HEADER include/spdk/nbd.h 00:02:17.981 CC app/iscsi_tgt/iscsi_tgt.o 00:02:17.981 TEST_HEADER include/spdk/net.h 00:02:17.981 TEST_HEADER include/spdk/notify.h 00:02:17.981 TEST_HEADER include/spdk/nvme.h 00:02:17.981 TEST_HEADER include/spdk/nvme_intel.h 00:02:17.981 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:17.981 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:17.981 TEST_HEADER include/spdk/nvme_zns.h 00:02:17.981 TEST_HEADER include/spdk/nvme_spec.h 00:02:17.981 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:17.981 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:17.981 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:17.981 TEST_HEADER include/spdk/nvmf_spec.h 00:02:17.981 TEST_HEADER include/spdk/nvmf.h 00:02:17.981 TEST_HEADER include/spdk/nvmf_transport.h 00:02:17.981 TEST_HEADER include/spdk/opal.h 00:02:17.981 TEST_HEADER include/spdk/opal_spec.h 00:02:17.981 TEST_HEADER include/spdk/pipe.h 00:02:17.981 TEST_HEADER include/spdk/pci_ids.h 00:02:17.981 TEST_HEADER include/spdk/queue.h 00:02:17.981 TEST_HEADER include/spdk/reduce.h 00:02:17.981 TEST_HEADER include/spdk/rpc.h 00:02:17.981 TEST_HEADER include/spdk/scheduler.h 00:02:17.981 TEST_HEADER include/spdk/scsi.h 00:02:17.981 TEST_HEADER include/spdk/scsi_spec.h 00:02:17.981 TEST_HEADER include/spdk/sock.h 00:02:17.981 TEST_HEADER include/spdk/string.h 00:02:17.981 TEST_HEADER include/spdk/stdinc.h 00:02:17.981 TEST_HEADER include/spdk/thread.h 00:02:17.981 TEST_HEADER include/spdk/trace.h 00:02:17.981 TEST_HEADER include/spdk/trace_parser.h 00:02:17.981 TEST_HEADER include/spdk/tree.h 00:02:17.981 TEST_HEADER include/spdk/util.h 00:02:17.981 TEST_HEADER include/spdk/ublk.h 00:02:17.981 CC app/nvmf_tgt/nvmf_main.o 00:02:17.981 TEST_HEADER include/spdk/version.h 00:02:17.981 TEST_HEADER include/spdk/uuid.h 00:02:17.981 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:02:17.981 TEST_HEADER include/spdk/vhost.h 00:02:17.981 TEST_HEADER include/spdk/vmd.h 00:02:17.981 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:17.981 TEST_HEADER include/spdk/zipf.h 00:02:17.981 TEST_HEADER include/spdk/xor.h 00:02:17.981 CXX test/cpp_headers/accel.o 00:02:17.981 CC app/spdk_tgt/spdk_tgt.o 00:02:17.981 CXX test/cpp_headers/accel_module.o 00:02:17.981 CXX test/cpp_headers/barrier.o 00:02:17.981 CXX test/cpp_headers/assert.o 00:02:17.981 CXX test/cpp_headers/base64.o 00:02:17.981 CXX test/cpp_headers/bdev.o 00:02:17.981 CXX test/cpp_headers/bit_array.o 00:02:17.981 CXX test/cpp_headers/blob_bdev.o 00:02:17.981 CXX test/cpp_headers/bdev_module.o 00:02:17.981 CXX test/cpp_headers/blobfs_bdev.o 00:02:17.981 CXX test/cpp_headers/blobfs.o 00:02:17.981 CXX test/cpp_headers/bdev_zone.o 00:02:17.981 CXX test/cpp_headers/bit_pool.o 00:02:17.981 CXX test/cpp_headers/blob.o 00:02:17.981 CXX test/cpp_headers/conf.o 00:02:17.982 CXX test/cpp_headers/config.o 00:02:17.982 CXX test/cpp_headers/cpuset.o 00:02:17.982 CXX test/cpp_headers/crc32.o 00:02:17.982 CXX test/cpp_headers/crc64.o 00:02:17.982 CXX test/cpp_headers/dma.o 00:02:17.982 CXX test/cpp_headers/crc16.o 00:02:17.982 CXX test/cpp_headers/env.o 00:02:17.982 CXX test/cpp_headers/dif.o 00:02:17.982 CXX test/cpp_headers/endian.o 00:02:17.982 CXX test/cpp_headers/env_dpdk.o 00:02:17.982 CXX test/cpp_headers/fd.o 00:02:17.982 CXX test/cpp_headers/fd_group.o 00:02:17.982 CXX test/cpp_headers/fsdev.o 00:02:17.982 CXX test/cpp_headers/file.o 00:02:17.982 CXX test/cpp_headers/event.o 00:02:17.982 CXX test/cpp_headers/fsdev_module.o 00:02:17.982 CXX test/cpp_headers/fuse_dispatcher.o 00:02:17.982 CXX test/cpp_headers/hexlify.o 00:02:17.982 CXX test/cpp_headers/gpt_spec.o 00:02:17.982 CXX test/cpp_headers/ftl.o 00:02:17.982 CXX test/cpp_headers/histogram_data.o 00:02:17.982 CXX test/cpp_headers/idxd_spec.o 00:02:17.982 CXX test/cpp_headers/idxd.o 00:02:17.982 CXX 
test/cpp_headers/init.o 00:02:17.982 CXX test/cpp_headers/iscsi_spec.o 00:02:17.982 CXX test/cpp_headers/ioat.o 00:02:17.982 CXX test/cpp_headers/ioat_spec.o 00:02:17.982 CXX test/cpp_headers/jsonrpc.o 00:02:17.982 CXX test/cpp_headers/json.o 00:02:17.982 CXX test/cpp_headers/keyring_module.o 00:02:17.982 CXX test/cpp_headers/keyring.o 00:02:17.982 CXX test/cpp_headers/lvol.o 00:02:17.982 CXX test/cpp_headers/md5.o 00:02:17.982 CXX test/cpp_headers/likely.o 00:02:17.982 CXX test/cpp_headers/log.o 00:02:17.982 CXX test/cpp_headers/mmio.o 00:02:17.982 CXX test/cpp_headers/nbd.o 00:02:17.982 CXX test/cpp_headers/memory.o 00:02:17.982 CXX test/cpp_headers/net.o 00:02:17.982 CXX test/cpp_headers/notify.o 00:02:17.982 CXX test/cpp_headers/nvme.o 00:02:17.982 CXX test/cpp_headers/nvme_intel.o 00:02:17.982 CXX test/cpp_headers/nvme_ocssd.o 00:02:17.982 CXX test/cpp_headers/nvme_spec.o 00:02:17.982 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:17.982 CXX test/cpp_headers/nvme_zns.o 00:02:17.982 CXX test/cpp_headers/nvmf_cmd.o 00:02:17.982 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:17.982 CXX test/cpp_headers/nvmf.o 00:02:17.982 CXX test/cpp_headers/nvmf_spec.o 00:02:17.982 CXX test/cpp_headers/nvmf_transport.o 00:02:17.982 CC test/env/vtophys/vtophys.o 00:02:17.982 CXX test/cpp_headers/opal.o 00:02:17.982 CC test/app/histogram_perf/histogram_perf.o 00:02:17.982 CC test/env/pci/pci_ut.o 00:02:17.982 CC test/app/jsoncat/jsoncat.o 00:02:17.982 CC examples/ioat/perf/perf.o 00:02:17.982 CC examples/util/zipf/zipf.o 00:02:17.982 CC test/env/memory/memory_ut.o 00:02:17.982 CC test/dma/test_dma/test_dma.o 00:02:17.982 CC test/thread/poller_perf/poller_perf.o 00:02:17.982 CC examples/ioat/verify/verify.o 00:02:17.982 CC test/app/stub/stub.o 00:02:17.982 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:17.982 CC app/fio/nvme/fio_plugin.o 00:02:17.982 CC app/fio/bdev/fio_plugin.o 00:02:18.254 LINK spdk_lspci 00:02:18.254 CC test/app/bdev_svc/bdev_svc.o 00:02:18.520 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:18.520 LINK rpc_client_test 00:02:18.520 LINK iscsi_tgt 00:02:18.520 LINK nvmf_tgt 00:02:18.520 LINK vtophys 00:02:18.520 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:18.520 LINK jsoncat 00:02:18.520 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:18.520 LINK spdk_nvme_discover 00:02:18.520 LINK histogram_perf 00:02:18.520 LINK zipf 00:02:18.520 LINK interrupt_tgt 00:02:18.520 CXX test/cpp_headers/pci_ids.o 00:02:18.520 CXX test/cpp_headers/opal_spec.o 00:02:18.520 CXX test/cpp_headers/pipe.o 00:02:18.520 CXX test/cpp_headers/queue.o 00:02:18.520 LINK poller_perf 00:02:18.520 CXX test/cpp_headers/reduce.o 00:02:18.520 CXX test/cpp_headers/rpc.o 00:02:18.520 CXX test/cpp_headers/scheduler.o 00:02:18.520 CXX test/cpp_headers/scsi.o 00:02:18.520 CXX test/cpp_headers/scsi_spec.o 00:02:18.520 CXX test/cpp_headers/sock.o 00:02:18.520 CXX test/cpp_headers/stdinc.o 00:02:18.520 CXX test/cpp_headers/string.o 00:02:18.520 CXX test/cpp_headers/thread.o 00:02:18.520 CXX test/cpp_headers/trace.o 00:02:18.520 CXX test/cpp_headers/trace_parser.o 00:02:18.520 CXX test/cpp_headers/tree.o 00:02:18.520 CXX test/cpp_headers/ublk.o 00:02:18.520 CXX test/cpp_headers/util.o 00:02:18.520 CXX test/cpp_headers/uuid.o 00:02:18.520 CXX test/cpp_headers/version.o 00:02:18.520 CXX test/cpp_headers/vfio_user_spec.o 00:02:18.520 CXX test/cpp_headers/vhost.o 00:02:18.520 CXX test/cpp_headers/vfio_user_pci.o 00:02:18.520 CXX test/cpp_headers/vmd.o 00:02:18.520 LINK stub 00:02:18.520 CXX test/cpp_headers/xor.o 00:02:18.520 CXX test/cpp_headers/zipf.o 00:02:18.520 LINK ioat_perf 00:02:18.520 LINK spdk_trace_record 00:02:18.780 LINK spdk_tgt 00:02:18.780 LINK spdk_trace 00:02:18.780 LINK env_dpdk_post_init 00:02:18.780 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:18.780 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:18.780 LINK bdev_svc 00:02:18.780 LINK verify 00:02:18.780 LINK spdk_dd 00:02:19.038 LINK test_dma 00:02:19.038 LINK pci_ut 00:02:19.038 
CC examples/idxd/perf/perf.o 00:02:19.039 CC examples/vmd/led/led.o 00:02:19.039 CC examples/vmd/lsvmd/lsvmd.o 00:02:19.039 CC test/event/event_perf/event_perf.o 00:02:19.039 CC test/event/reactor_perf/reactor_perf.o 00:02:19.039 CC examples/sock/hello_world/hello_sock.o 00:02:19.039 CC test/event/reactor/reactor.o 00:02:19.039 CC examples/thread/thread/thread_ex.o 00:02:19.039 CC test/event/app_repeat/app_repeat.o 00:02:19.039 LINK spdk_bdev 00:02:19.039 LINK spdk_nvme 00:02:19.039 CC test/event/scheduler/scheduler.o 00:02:19.039 LINK nvme_fuzz 00:02:19.297 LINK vhost_fuzz 00:02:19.297 LINK lsvmd 00:02:19.297 LINK led 00:02:19.297 CC app/vhost/vhost.o 00:02:19.297 LINK event_perf 00:02:19.297 LINK spdk_nvme_identify 00:02:19.297 LINK spdk_top 00:02:19.297 LINK reactor 00:02:19.297 LINK mem_callbacks 00:02:19.297 LINK reactor_perf 00:02:19.297 LINK app_repeat 00:02:19.297 LINK spdk_nvme_perf 00:02:19.297 LINK hello_sock 00:02:19.297 LINK thread 00:02:19.297 LINK scheduler 00:02:19.297 LINK idxd_perf 00:02:19.297 CC test/nvme/cuse/cuse.o 00:02:19.297 CC test/nvme/fused_ordering/fused_ordering.o 00:02:19.297 CC test/nvme/startup/startup.o 00:02:19.297 CC test/nvme/sgl/sgl.o 00:02:19.297 CC test/nvme/compliance/nvme_compliance.o 00:02:19.297 CC test/nvme/aer/aer.o 00:02:19.297 CC test/nvme/err_injection/err_injection.o 00:02:19.297 CC test/nvme/reset/reset.o 00:02:19.297 CC test/nvme/e2edp/nvme_dp.o 00:02:19.297 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:19.297 CC test/nvme/boot_partition/boot_partition.o 00:02:19.297 CC test/nvme/overhead/overhead.o 00:02:19.297 LINK vhost 00:02:19.297 CC test/nvme/connect_stress/connect_stress.o 00:02:19.555 CC test/nvme/fdp/fdp.o 00:02:19.555 CC test/nvme/simple_copy/simple_copy.o 00:02:19.555 CC test/nvme/reserve/reserve.o 00:02:19.555 CC test/accel/dif/dif.o 00:02:19.555 CC test/blobfs/mkfs/mkfs.o 00:02:19.555 LINK memory_ut 00:02:19.555 CC test/lvol/esnap/esnap.o 00:02:19.555 LINK startup 00:02:19.555 LINK boot_partition 
00:02:19.555 LINK connect_stress 00:02:19.555 LINK err_injection 00:02:19.555 LINK fused_ordering 00:02:19.555 LINK doorbell_aers 00:02:19.555 LINK reserve 00:02:19.555 LINK simple_copy 00:02:19.555 LINK nvme_dp 00:02:19.555 LINK aer 00:02:19.555 LINK reset 00:02:19.814 LINK sgl 00:02:19.814 LINK overhead 00:02:19.814 LINK nvme_compliance 00:02:19.814 LINK mkfs 00:02:19.814 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:19.814 CC examples/nvme/hotplug/hotplug.o 00:02:19.814 LINK fdp 00:02:19.814 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:19.814 CC examples/nvme/reconnect/reconnect.o 00:02:19.814 CC examples/nvme/hello_world/hello_world.o 00:02:19.814 CC examples/nvme/arbitration/arbitration.o 00:02:19.814 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:19.814 CC examples/nvme/abort/abort.o 00:02:19.814 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:19.814 CC examples/accel/perf/accel_perf.o 00:02:19.814 CC examples/blob/hello_world/hello_blob.o 00:02:19.814 CC examples/blob/cli/blobcli.o 00:02:19.814 LINK cmb_copy 00:02:19.814 LINK pmr_persistence 00:02:20.071 LINK hotplug 00:02:20.071 LINK hello_world 00:02:20.071 LINK iscsi_fuzz 00:02:20.071 LINK dif 00:02:20.071 LINK arbitration 00:02:20.071 LINK reconnect 00:02:20.071 LINK abort 00:02:20.071 LINK hello_fsdev 00:02:20.071 LINK hello_blob 00:02:20.071 LINK nvme_manage 00:02:20.071 LINK accel_perf 00:02:20.330 LINK blobcli 00:02:20.330 LINK cuse 00:02:20.589 CC test/bdev/bdevio/bdevio.o 00:02:20.589 CC examples/bdev/hello_world/hello_bdev.o 00:02:20.589 CC examples/bdev/bdevperf/bdevperf.o 00:02:20.848 LINK bdevio 00:02:20.848 LINK hello_bdev 00:02:21.417 LINK bdevperf 00:02:21.676 CC examples/nvmf/nvmf/nvmf.o 00:02:22.244 LINK nvmf 00:02:23.182 LINK esnap 00:02:23.442 00:02:23.442 real 0m55.185s 00:02:23.442 user 8m2.192s 00:02:23.442 sys 3m41.769s 00:02:23.442 09:34:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:23.442 09:34:46 make -- common/autotest_common.sh@10 -- $ set +x 
00:02:23.442 ************************************ 00:02:23.442 END TEST make 00:02:23.442 ************************************ 00:02:23.442 09:34:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:23.442 09:34:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:23.442 09:34:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:23.442 09:34:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.442 09:34:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:23.442 09:34:46 -- pm/common@44 -- $ pid=2636363 00:02:23.442 09:34:46 -- pm/common@50 -- $ kill -TERM 2636363 00:02:23.442 09:34:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.442 09:34:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:23.442 09:34:46 -- pm/common@44 -- $ pid=2636365 00:02:23.442 09:34:46 -- pm/common@50 -- $ kill -TERM 2636365 00:02:23.442 09:34:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.442 09:34:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:23.442 09:34:46 -- pm/common@44 -- $ pid=2636366 00:02:23.442 09:34:46 -- pm/common@50 -- $ kill -TERM 2636366 00:02:23.442 09:34:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.442 09:34:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:23.442 09:34:46 -- pm/common@44 -- $ pid=2636390 00:02:23.442 09:34:46 -- pm/common@50 -- $ sudo -E kill -TERM 2636390 00:02:23.442 09:34:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:23.442 09:34:46 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:23.442 09:34:46 -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:02:23.442 09:34:46 -- common/autotest_common.sh@1703 -- # lcov --version 00:02:23.442 09:34:46 -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:02:23.702 09:34:46 -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:02:23.702 09:34:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:23.702 09:34:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:23.702 09:34:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:23.702 09:34:46 -- scripts/common.sh@336 -- # IFS=.-: 00:02:23.702 09:34:46 -- scripts/common.sh@336 -- # read -ra ver1 00:02:23.702 09:34:46 -- scripts/common.sh@337 -- # IFS=.-: 00:02:23.702 09:34:46 -- scripts/common.sh@337 -- # read -ra ver2 00:02:23.702 09:34:46 -- scripts/common.sh@338 -- # local 'op=<' 00:02:23.702 09:34:46 -- scripts/common.sh@340 -- # ver1_l=2 00:02:23.702 09:34:46 -- scripts/common.sh@341 -- # ver2_l=1 00:02:23.702 09:34:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:23.702 09:34:46 -- scripts/common.sh@344 -- # case "$op" in 00:02:23.702 09:34:46 -- scripts/common.sh@345 -- # : 1 00:02:23.702 09:34:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:23.702 09:34:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:23.702 09:34:46 -- scripts/common.sh@365 -- # decimal 1 00:02:23.702 09:34:46 -- scripts/common.sh@353 -- # local d=1 00:02:23.702 09:34:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:23.702 09:34:46 -- scripts/common.sh@355 -- # echo 1 00:02:23.702 09:34:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:23.702 09:34:46 -- scripts/common.sh@366 -- # decimal 2 00:02:23.702 09:34:46 -- scripts/common.sh@353 -- # local d=2 00:02:23.702 09:34:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:23.702 09:34:46 -- scripts/common.sh@355 -- # echo 2 00:02:23.702 09:34:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:23.702 09:34:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:23.702 09:34:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:23.702 09:34:46 -- scripts/common.sh@368 -- # return 0 00:02:23.702 09:34:46 -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:23.702 09:34:46 -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:02:23.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.702 --rc genhtml_branch_coverage=1 00:02:23.702 --rc genhtml_function_coverage=1 00:02:23.702 --rc genhtml_legend=1 00:02:23.702 --rc geninfo_all_blocks=1 00:02:23.702 --rc geninfo_unexecuted_blocks=1 00:02:23.702 00:02:23.702 ' 00:02:23.702 09:34:46 -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:02:23.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.702 --rc genhtml_branch_coverage=1 00:02:23.702 --rc genhtml_function_coverage=1 00:02:23.702 --rc genhtml_legend=1 00:02:23.702 --rc geninfo_all_blocks=1 00:02:23.702 --rc geninfo_unexecuted_blocks=1 00:02:23.702 00:02:23.702 ' 00:02:23.702 09:34:46 -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:02:23.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.702 --rc genhtml_branch_coverage=1 00:02:23.702 --rc 
genhtml_function_coverage=1 00:02:23.702 --rc genhtml_legend=1 00:02:23.702 --rc geninfo_all_blocks=1 00:02:23.702 --rc geninfo_unexecuted_blocks=1 00:02:23.702 00:02:23.702 ' 00:02:23.702 09:34:46 -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:02:23.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.702 --rc genhtml_branch_coverage=1 00:02:23.702 --rc genhtml_function_coverage=1 00:02:23.702 --rc genhtml_legend=1 00:02:23.702 --rc geninfo_all_blocks=1 00:02:23.702 --rc geninfo_unexecuted_blocks=1 00:02:23.702 00:02:23.702 ' 00:02:23.702 09:34:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:23.702 09:34:46 -- nvmf/common.sh@7 -- # uname -s 00:02:23.702 09:34:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:23.702 09:34:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:23.702 09:34:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:23.702 09:34:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:23.702 09:34:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:23.702 09:34:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:23.702 09:34:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:23.702 09:34:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:23.702 09:34:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:23.702 09:34:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:23.702 09:34:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:23.702 09:34:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:23.702 09:34:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:23.702 09:34:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:23.702 09:34:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:23.702 09:34:46 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:23.702 09:34:46 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:23.702 09:34:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:23.702 09:34:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:23.702 09:34:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.702 09:34:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.702 09:34:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.702 09:34:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.702 09:34:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.702 09:34:46 -- paths/export.sh@5 -- # export PATH 00:02:23.702 09:34:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.702 09:34:46 -- nvmf/common.sh@51 -- # : 0 00:02:23.702 09:34:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:23.702 09:34:46 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:23.702 09:34:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:23.702 09:34:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:23.702 09:34:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:23.702 09:34:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:23.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:23.702 09:34:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:23.702 09:34:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:23.702 09:34:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:23.702 09:34:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:23.702 09:34:46 -- spdk/autotest.sh@32 -- # uname -s 00:02:23.702 09:34:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:23.703 09:34:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:23.703 09:34:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.703 09:34:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:23.703 09:34:46 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.703 09:34:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:23.703 09:34:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:23.703 09:34:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:23.703 09:34:46 -- spdk/autotest.sh@48 -- # udevadm_pid=2698799 00:02:23.703 09:34:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:23.703 09:34:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:23.703 09:34:46 -- pm/common@17 -- # local monitor 00:02:23.703 09:34:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.703 09:34:46 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:23.703 09:34:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.703 09:34:46 -- pm/common@21 -- # date +%s 00:02:23.703 09:34:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.703 09:34:46 -- pm/common@21 -- # date +%s 00:02:23.703 09:34:46 -- pm/common@25 -- # sleep 1 00:02:23.703 09:34:46 -- pm/common@21 -- # date +%s 00:02:23.703 09:34:46 -- pm/common@21 -- # date +%s 00:02:23.703 09:34:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091686 00:02:23.703 09:34:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091686 00:02:23.703 09:34:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091686 00:02:23.703 09:34:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732091686 00:02:23.703 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091686_collect-cpu-load.pm.log 00:02:23.703 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091686_collect-vmstat.pm.log 00:02:23.703 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091686_collect-cpu-temp.pm.log 00:02:23.703 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732091686_collect-bmc-pm.bmc.pm.log 00:02:24.642 
09:34:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:24.642 09:34:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:24.642 09:34:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:24.642 09:34:47 -- common/autotest_common.sh@10 -- # set +x 00:02:24.642 09:34:47 -- spdk/autotest.sh@59 -- # create_test_list 00:02:24.642 09:34:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:24.642 09:34:47 -- common/autotest_common.sh@10 -- # set +x 00:02:24.642 09:34:47 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:24.642 09:34:47 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.642 09:34:47 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.642 09:34:47 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:24.642 09:34:47 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:24.642 09:34:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:24.642 09:34:47 -- common/autotest_common.sh@1457 -- # uname 00:02:24.642 09:34:47 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:24.642 09:34:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:24.642 09:34:47 -- common/autotest_common.sh@1477 -- # uname 00:02:24.642 09:34:47 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:24.642 09:34:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:24.642 09:34:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:24.901 lcov: LCOV version 1.15 00:02:24.901 09:34:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:46.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:46.843 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:50.137 09:35:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:50.137 09:35:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:50.137 09:35:13 -- common/autotest_common.sh@10 -- # set +x 00:02:50.137 09:35:13 -- spdk/autotest.sh@78 -- # rm -f 00:02:50.137 09:35:13 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.675 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:52.934 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:52.934 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:53.194 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:53.194 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:53.194 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:53.194 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:53.194 09:35:16 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:53.194 09:35:16 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:53.194 09:35:16 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:53.194 09:35:16 -- common/autotest_common.sh@1658 -- # local nvme bdf ns is_zoned=0 00:02:53.194 09:35:16 -- common/autotest_common.sh@1665 -- # for nvme in /sys/class/nvme/nvme* 00:02:53.194 09:35:16 -- common/autotest_common.sh@1666 -- # bdf=0000:5e:00.0 00:02:53.194 09:35:16 -- common/autotest_common.sh@1666 -- # is_zoned=0 00:02:53.194 09:35:16 -- common/autotest_common.sh@1667 -- # for ns in "$nvme/"nvme*n* 00:02:53.194 09:35:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:02:53.194 09:35:16 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:53.194 09:35:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.194 09:35:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:53.194 09:35:16 -- common/autotest_common.sh@1668 -- # (( is_zoned == 1 )) 00:02:53.194 09:35:16 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:53.194 09:35:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:53.194 09:35:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:53.194 09:35:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:53.194 09:35:16 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:53.194 09:35:16 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:53.194 No valid GPT data, bailing 00:02:53.194 09:35:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:53.194 09:35:16 -- scripts/common.sh@394 -- # pt= 00:02:53.194 09:35:16 -- 
scripts/common.sh@395 -- # return 1 00:02:53.194 09:35:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:53.194 1+0 records in 00:02:53.194 1+0 records out 00:02:53.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548983 s, 191 MB/s 00:02:53.194 09:35:16 -- spdk/autotest.sh@105 -- # sync 00:02:53.194 09:35:16 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:53.194 09:35:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:53.194 09:35:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:59.946 09:35:22 -- spdk/autotest.sh@111 -- # uname -s 00:02:59.946 09:35:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:59.946 09:35:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:59.946 09:35:22 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:01.854 Hugepages 00:03:01.854 node hugesize free / total 00:03:01.854 node0 1048576kB 0 / 0 00:03:01.854 node0 2048kB 0 / 0 00:03:01.854 node1 1048576kB 0 / 0 00:03:01.854 node1 2048kB 0 / 0 00:03:01.854 00:03:01.854 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:01.854 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:01.854 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:01.854 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:01.854 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:01.855 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:01.855 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:01.855 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:01.855 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:01.855 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:01.855 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:01.855 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:01.855 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:01.855 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:01.855 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:01.855 
I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:01.855 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:01.855 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:01.855 09:35:25 -- spdk/autotest.sh@117 -- # uname -s 00:03:01.855 09:35:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:01.855 09:35:25 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:01.855 09:35:25 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.145 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:05.145 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:05.405 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:05.665 09:35:28 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:06.603 09:35:29 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:06.603 09:35:29 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:06.603 09:35:29 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:06.603 09:35:29 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:06.603 09:35:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:06.603 09:35:29 -- common/autotest_common.sh@1498 -- # local bdfs 
00:03:06.603 09:35:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:06.603 09:35:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:06.603 09:35:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:06.862 09:35:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:06.862 09:35:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:06.862 09:35:29 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.399 Waiting for block devices as requested 00:03:09.399 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:09.659 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:09.659 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:09.919 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:09.919 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:09.919 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:09.919 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:10.179 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:10.179 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:10.179 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:10.438 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:10.438 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:10.438 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:10.698 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:10.698 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:10.698 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:10.698 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:10.957 09:35:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:10.957 09:35:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:10.957 09:35:34 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:10.957 
09:35:34 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:10.957 09:35:34 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:10.957 09:35:34 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:10.957 09:35:34 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:10.957 09:35:34 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:10.957 09:35:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:10.957 09:35:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:10.957 09:35:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:10.957 09:35:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:10.957 09:35:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:10.957 09:35:34 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:10.957 09:35:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:10.957 09:35:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:10.957 09:35:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:10.957 09:35:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:10.958 09:35:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:10.958 09:35:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:10.958 09:35:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:10.958 09:35:34 -- common/autotest_common.sh@1543 -- # continue 00:03:10.958 09:35:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:10.958 09:35:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:10.958 09:35:34 -- common/autotest_common.sh@10 -- # set +x 00:03:10.958 09:35:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:10.958 09:35:34 -- common/autotest_common.sh@726 -- # xtrace_disable 
00:03:10.958 09:35:34 -- common/autotest_common.sh@10 -- # set +x 00:03:10.958 09:35:34 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.249 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:14.249 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:14.818 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.818 09:35:38 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:14.818 09:35:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:14.818 09:35:38 -- common/autotest_common.sh@10 -- # set +x 00:03:14.818 09:35:38 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:14.818 09:35:38 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:14.818 09:35:38 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:14.818 09:35:38 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:14.818 09:35:38 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:14.818 09:35:38 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:14.818 09:35:38 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:14.818 09:35:38 -- common/autotest_common.sh@1564 -- # 
get_nvme_bdfs 00:03:14.818 09:35:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:14.818 09:35:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:14.818 09:35:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:14.818 09:35:38 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:14.818 09:35:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:15.078 09:35:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:15.078 09:35:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:15.078 09:35:38 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:15.078 09:35:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:15.078 09:35:38 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:15.078 09:35:38 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:15.078 09:35:38 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:15.078 09:35:38 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:15.078 09:35:38 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:15.078 09:35:38 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:15.078 09:35:38 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2713233 00:03:15.078 09:35:38 -- common/autotest_common.sh@1585 -- # waitforlisten 2713233 00:03:15.078 09:35:38 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:15.078 09:35:38 -- common/autotest_common.sh@835 -- # '[' -z 2713233 ']' 00:03:15.078 09:35:38 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:15.078 09:35:38 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:15.078 09:35:38 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:15.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:15.078 09:35:38 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:15.078 09:35:38 -- common/autotest_common.sh@10 -- # set +x 00:03:15.078 [2024-11-20 09:35:38.232387] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:03:15.078 [2024-11-20 09:35:38.232440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713233 ] 00:03:15.078 [2024-11-20 09:35:38.308640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:15.078 [2024-11-20 09:35:38.351269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:15.337 09:35:38 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:15.337 09:35:38 -- common/autotest_common.sh@868 -- # return 0 00:03:15.337 09:35:38 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:15.337 09:35:38 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:15.337 09:35:38 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:18.628 nvme0n1 00:03:18.628 09:35:41 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:18.628 [2024-11-20 09:35:41.753942] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:18.628 request: 00:03:18.628 { 00:03:18.628 "nvme_ctrlr_name": "nvme0", 00:03:18.628 "password": "test", 00:03:18.628 "method": "bdev_nvme_opal_revert", 00:03:18.628 "req_id": 1 00:03:18.628 } 00:03:18.628 Got JSON-RPC error response 00:03:18.628 response: 00:03:18.628 { 
00:03:18.628 "code": -32602, 00:03:18.628 "message": "Invalid parameters" 00:03:18.628 } 00:03:18.628 09:35:41 -- common/autotest_common.sh@1591 -- # true 00:03:18.628 09:35:41 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:18.628 09:35:41 -- common/autotest_common.sh@1595 -- # killprocess 2713233 00:03:18.628 09:35:41 -- common/autotest_common.sh@954 -- # '[' -z 2713233 ']' 00:03:18.628 09:35:41 -- common/autotest_common.sh@958 -- # kill -0 2713233 00:03:18.628 09:35:41 -- common/autotest_common.sh@959 -- # uname 00:03:18.628 09:35:41 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:18.628 09:35:41 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2713233 00:03:18.628 09:35:41 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:18.628 09:35:41 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:18.628 09:35:41 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2713233' 00:03:18.628 killing process with pid 2713233 00:03:18.628 09:35:41 -- common/autotest_common.sh@973 -- # kill 2713233 00:03:18.628 09:35:41 -- common/autotest_common.sh@978 -- # wait 2713233 00:03:20.533 09:35:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:20.533 09:35:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:20.533 09:35:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:20.533 09:35:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:20.533 09:35:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:20.533 09:35:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:20.533 09:35:43 -- common/autotest_common.sh@10 -- # set +x 00:03:20.533 09:35:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:20.533 09:35:43 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:20.533 09:35:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:20.533 09:35:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:03:20.533 09:35:43 -- common/autotest_common.sh@10 -- # set +x 00:03:20.533 ************************************ 00:03:20.533 START TEST env 00:03:20.533 ************************************ 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:20.533 * Looking for test storage... 00:03:20.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1703 -- # lcov --version 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:03:20.533 09:35:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:20.533 09:35:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:20.533 09:35:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:20.533 09:35:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.533 09:35:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:20.533 09:35:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:20.533 09:35:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:20.533 09:35:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:20.533 09:35:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:20.533 09:35:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:20.533 09:35:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:20.533 09:35:43 env -- scripts/common.sh@344 -- # case "$op" in 00:03:20.533 09:35:43 env -- scripts/common.sh@345 -- # : 1 00:03:20.533 09:35:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:20.533 09:35:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:20.533 09:35:43 env -- scripts/common.sh@365 -- # decimal 1 00:03:20.533 09:35:43 env -- scripts/common.sh@353 -- # local d=1 00:03:20.533 09:35:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.533 09:35:43 env -- scripts/common.sh@355 -- # echo 1 00:03:20.533 09:35:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:20.533 09:35:43 env -- scripts/common.sh@366 -- # decimal 2 00:03:20.533 09:35:43 env -- scripts/common.sh@353 -- # local d=2 00:03:20.533 09:35:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.533 09:35:43 env -- scripts/common.sh@355 -- # echo 2 00:03:20.533 09:35:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:20.533 09:35:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:20.533 09:35:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:20.533 09:35:43 env -- scripts/common.sh@368 -- # return 0 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:03:20.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.533 --rc genhtml_branch_coverage=1 00:03:20.533 --rc genhtml_function_coverage=1 00:03:20.533 --rc genhtml_legend=1 00:03:20.533 --rc geninfo_all_blocks=1 00:03:20.533 --rc geninfo_unexecuted_blocks=1 00:03:20.533 00:03:20.533 ' 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:03:20.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.533 --rc genhtml_branch_coverage=1 00:03:20.533 --rc genhtml_function_coverage=1 00:03:20.533 --rc genhtml_legend=1 00:03:20.533 --rc geninfo_all_blocks=1 00:03:20.533 --rc geninfo_unexecuted_blocks=1 00:03:20.533 00:03:20.533 ' 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:03:20.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:20.533 --rc genhtml_branch_coverage=1 00:03:20.533 --rc genhtml_function_coverage=1 00:03:20.533 --rc genhtml_legend=1 00:03:20.533 --rc geninfo_all_blocks=1 00:03:20.533 --rc geninfo_unexecuted_blocks=1 00:03:20.533 00:03:20.533 ' 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:03:20.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.533 --rc genhtml_branch_coverage=1 00:03:20.533 --rc genhtml_function_coverage=1 00:03:20.533 --rc genhtml_legend=1 00:03:20.533 --rc geninfo_all_blocks=1 00:03:20.533 --rc geninfo_unexecuted_blocks=1 00:03:20.533 00:03:20.533 ' 00:03:20.533 09:35:43 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:20.533 09:35:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.533 ************************************ 00:03:20.533 START TEST env_memory 00:03:20.533 ************************************ 00:03:20.533 09:35:43 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.533 00:03:20.533 00:03:20.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.533 http://cunit.sourceforge.net/ 00:03:20.533 00:03:20.533 00:03:20.533 Suite: memory 00:03:20.533 Test: alloc and free memory map ...[2024-11-20 09:35:43.681866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:20.533 passed 00:03:20.533 Test: mem map translation ...[2024-11-20 09:35:43.700160] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:20.533 [2024-11-20 
09:35:43.700176] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:20.533 [2024-11-20 09:35:43.700211] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:20.533 [2024-11-20 09:35:43.700218] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:20.533 passed 00:03:20.533 Test: mem map registration ...[2024-11-20 09:35:43.736846] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:20.533 [2024-11-20 09:35:43.736862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:20.533 passed 00:03:20.533 Test: mem map adjacent registrations ...passed 00:03:20.533 00:03:20.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:20.533 suites 1 1 n/a 0 0 00:03:20.533 tests 4 4 4 0 0 00:03:20.533 asserts 152 152 152 0 n/a 00:03:20.533 00:03:20.533 Elapsed time = 0.134 seconds 00:03:20.533 00:03:20.533 real 0m0.148s 00:03:20.533 user 0m0.140s 00:03:20.533 sys 0m0.007s 00:03:20.533 09:35:43 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:20.533 09:35:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:20.533 ************************************ 00:03:20.533 END TEST env_memory 00:03:20.533 ************************************ 00:03:20.533 09:35:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:20.533 09:35:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:20.533 09:35:43 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.533 ************************************ 00:03:20.533 START TEST env_vtophys 00:03:20.533 ************************************ 00:03:20.534 09:35:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:20.794 EAL: lib.eal log level changed from notice to debug 00:03:20.794 EAL: Detected lcore 0 as core 0 on socket 0 00:03:20.794 EAL: Detected lcore 1 as core 1 on socket 0 00:03:20.794 EAL: Detected lcore 2 as core 2 on socket 0 00:03:20.794 EAL: Detected lcore 3 as core 3 on socket 0 00:03:20.794 EAL: Detected lcore 4 as core 4 on socket 0 00:03:20.794 EAL: Detected lcore 5 as core 5 on socket 0 00:03:20.794 EAL: Detected lcore 6 as core 6 on socket 0 00:03:20.794 EAL: Detected lcore 7 as core 8 on socket 0 00:03:20.794 EAL: Detected lcore 8 as core 9 on socket 0 00:03:20.794 EAL: Detected lcore 9 as core 10 on socket 0 00:03:20.794 EAL: Detected lcore 10 as core 11 on socket 0 00:03:20.794 EAL: Detected lcore 11 as core 12 on socket 0 00:03:20.794 EAL: Detected lcore 12 as core 13 on socket 0 00:03:20.794 EAL: Detected lcore 13 as core 16 on socket 0 00:03:20.794 EAL: Detected lcore 14 as core 17 on socket 0 00:03:20.794 EAL: Detected lcore 15 as core 18 on socket 0 00:03:20.794 EAL: Detected lcore 16 as core 19 on socket 0 00:03:20.794 EAL: Detected lcore 17 as core 20 on socket 0 00:03:20.794 EAL: Detected lcore 18 as core 21 on socket 0 00:03:20.794 EAL: Detected lcore 19 as core 25 on socket 0 00:03:20.794 EAL: Detected lcore 20 as core 26 on socket 0 00:03:20.794 EAL: Detected lcore 21 as core 27 on socket 0 00:03:20.794 EAL: Detected lcore 22 as core 28 on socket 0 00:03:20.794 EAL: Detected lcore 23 as core 29 on socket 0 00:03:20.794 EAL: Detected lcore 24 as core 0 on socket 1 00:03:20.794 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:20.794 EAL: Detected lcore 26 as core 2 on socket 1 00:03:20.794 EAL: Detected lcore 27 as core 3 on socket 1 00:03:20.794 EAL: Detected lcore 28 as core 4 on socket 1 00:03:20.794 EAL: Detected lcore 29 as core 5 on socket 1 00:03:20.794 EAL: Detected lcore 30 as core 6 on socket 1 00:03:20.794 EAL: Detected lcore 31 as core 9 on socket 1 00:03:20.794 EAL: Detected lcore 32 as core 10 on socket 1 00:03:20.794 EAL: Detected lcore 33 as core 11 on socket 1 00:03:20.794 EAL: Detected lcore 34 as core 12 on socket 1 00:03:20.794 EAL: Detected lcore 35 as core 13 on socket 1 00:03:20.794 EAL: Detected lcore 36 as core 16 on socket 1 00:03:20.794 EAL: Detected lcore 37 as core 17 on socket 1 00:03:20.794 EAL: Detected lcore 38 as core 18 on socket 1 00:03:20.794 EAL: Detected lcore 39 as core 19 on socket 1 00:03:20.794 EAL: Detected lcore 40 as core 20 on socket 1 00:03:20.794 EAL: Detected lcore 41 as core 21 on socket 1 00:03:20.794 EAL: Detected lcore 42 as core 24 on socket 1 00:03:20.794 EAL: Detected lcore 43 as core 25 on socket 1 00:03:20.794 EAL: Detected lcore 44 as core 26 on socket 1 00:03:20.794 EAL: Detected lcore 45 as core 27 on socket 1 00:03:20.794 EAL: Detected lcore 46 as core 28 on socket 1 00:03:20.794 EAL: Detected lcore 47 as core 29 on socket 1 00:03:20.794 EAL: Detected lcore 48 as core 0 on socket 0 00:03:20.794 EAL: Detected lcore 49 as core 1 on socket 0 00:03:20.794 EAL: Detected lcore 50 as core 2 on socket 0 00:03:20.794 EAL: Detected lcore 51 as core 3 on socket 0 00:03:20.794 EAL: Detected lcore 52 as core 4 on socket 0 00:03:20.794 EAL: Detected lcore 53 as core 5 on socket 0 00:03:20.794 EAL: Detected lcore 54 as core 6 on socket 0 00:03:20.794 EAL: Detected lcore 55 as core 8 on socket 0 00:03:20.794 EAL: Detected lcore 56 as core 9 on socket 0 00:03:20.794 EAL: Detected lcore 57 as core 10 on socket 0 00:03:20.794 EAL: Detected lcore 58 as core 11 on socket 0 00:03:20.794 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:20.794 EAL: Detected lcore 60 as core 13 on socket 0 00:03:20.794 EAL: Detected lcore 61 as core 16 on socket 0 00:03:20.794 EAL: Detected lcore 62 as core 17 on socket 0 00:03:20.794 EAL: Detected lcore 63 as core 18 on socket 0 00:03:20.794 EAL: Detected lcore 64 as core 19 on socket 0 00:03:20.794 EAL: Detected lcore 65 as core 20 on socket 0 00:03:20.794 EAL: Detected lcore 66 as core 21 on socket 0 00:03:20.794 EAL: Detected lcore 67 as core 25 on socket 0 00:03:20.794 EAL: Detected lcore 68 as core 26 on socket 0 00:03:20.794 EAL: Detected lcore 69 as core 27 on socket 0 00:03:20.794 EAL: Detected lcore 70 as core 28 on socket 0 00:03:20.794 EAL: Detected lcore 71 as core 29 on socket 0 00:03:20.794 EAL: Detected lcore 72 as core 0 on socket 1 00:03:20.794 EAL: Detected lcore 73 as core 1 on socket 1 00:03:20.794 EAL: Detected lcore 74 as core 2 on socket 1 00:03:20.794 EAL: Detected lcore 75 as core 3 on socket 1 00:03:20.794 EAL: Detected lcore 76 as core 4 on socket 1 00:03:20.794 EAL: Detected lcore 77 as core 5 on socket 1 00:03:20.795 EAL: Detected lcore 78 as core 6 on socket 1 00:03:20.795 EAL: Detected lcore 79 as core 9 on socket 1 00:03:20.795 EAL: Detected lcore 80 as core 10 on socket 1 00:03:20.795 EAL: Detected lcore 81 as core 11 on socket 1 00:03:20.795 EAL: Detected lcore 82 as core 12 on socket 1 00:03:20.795 EAL: Detected lcore 83 as core 13 on socket 1 00:03:20.795 EAL: Detected lcore 84 as core 16 on socket 1 00:03:20.795 EAL: Detected lcore 85 as core 17 on socket 1 00:03:20.795 EAL: Detected lcore 86 as core 18 on socket 1 00:03:20.795 EAL: Detected lcore 87 as core 19 on socket 1 00:03:20.795 EAL: Detected lcore 88 as core 20 on socket 1 00:03:20.795 EAL: Detected lcore 89 as core 21 on socket 1 00:03:20.795 EAL: Detected lcore 90 as core 24 on socket 1 00:03:20.795 EAL: Detected lcore 91 as core 25 on socket 1 00:03:20.795 EAL: Detected lcore 92 as core 26 on socket 1 00:03:20.795 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:20.795 EAL: Detected lcore 94 as core 28 on socket 1 00:03:20.795 EAL: Detected lcore 95 as core 29 on socket 1 00:03:20.795 EAL: Maximum logical cores by configuration: 128 00:03:20.795 EAL: Detected CPU lcores: 96 00:03:20.795 EAL: Detected NUMA nodes: 2 00:03:20.795 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:20.795 EAL: Detected shared linkage of DPDK 00:03:20.795 EAL: No shared files mode enabled, IPC will be disabled 00:03:20.795 EAL: Bus pci wants IOVA as 'DC' 00:03:20.795 EAL: Buses did not request a specific IOVA mode. 00:03:20.795 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:20.795 EAL: Selected IOVA mode 'VA' 00:03:20.795 EAL: Probing VFIO support... 00:03:20.795 EAL: IOMMU type 1 (Type 1) is supported 00:03:20.795 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:20.795 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:20.795 EAL: VFIO support initialized 00:03:20.795 EAL: Ask a virtual area of 0x2e000 bytes 00:03:20.795 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:20.795 EAL: Setting up physically contiguous memory... 
00:03:20.795 EAL: Setting maximum number of open files to 524288 00:03:20.795 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:20.795 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:20.795 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:20.795 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.795 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:20.795 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.795 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.795 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:20.795 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:20.795 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.795 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:20.795 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.795 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.795 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:20.795 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:20.795 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.795 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:20.795 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.795 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.795 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:20.795 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:20.795 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.795 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:20.795 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:20.795 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.795 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:20.795 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:20.795 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:20.795 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.795 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:20.795 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.795 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.795 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:20.795 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:20.795 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.795 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:20.795 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.795 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.795 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:20.795 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:20.795 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.795 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:20.795 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.795 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.795 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:20.795 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:20.795 EAL: Ask a virtual area of 0x61000 bytes 00:03:20.795 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:20.795 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:20.795 EAL: Ask a virtual area of 0x400000000 bytes 00:03:20.795 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:20.795 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:20.795 EAL: Hugepages will be freed exactly as allocated. 
00:03:20.795 EAL: No shared files mode enabled, IPC is disabled 00:03:20.795 EAL: No shared files mode enabled, IPC is disabled 00:03:20.795 EAL: TSC frequency is ~2300000 KHz 00:03:20.795 EAL: Main lcore 0 is ready (tid=7f6327bcba00;cpuset=[0]) 00:03:20.795 EAL: Trying to obtain current memory policy. 00:03:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.795 EAL: Restoring previous memory policy: 0 00:03:20.795 EAL: request: mp_malloc_sync 00:03:20.795 EAL: No shared files mode enabled, IPC is disabled 00:03:20.795 EAL: Heap on socket 0 was expanded by 2MB 00:03:20.795 EAL: No shared files mode enabled, IPC is disabled 00:03:20.795 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:20.795 EAL: Mem event callback 'spdk:(nil)' registered 00:03:20.795 00:03:20.795 00:03:20.795 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.795 http://cunit.sourceforge.net/ 00:03:20.795 00:03:20.795 00:03:20.795 Suite: components_suite 00:03:20.795 Test: vtophys_malloc_test ...passed 00:03:20.795 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.795 EAL: Restoring previous memory policy: 4 00:03:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.795 EAL: request: mp_malloc_sync 00:03:20.795 EAL: No shared files mode enabled, IPC is disabled 00:03:20.795 EAL: Heap on socket 0 was expanded by 4MB 00:03:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.795 EAL: request: mp_malloc_sync 00:03:20.795 EAL: No shared files mode enabled, IPC is disabled 00:03:20.795 EAL: Heap on socket 0 was shrunk by 4MB 00:03:20.795 EAL: Trying to obtain current memory policy. 
00:03:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.795 EAL: Restoring previous memory policy: 4 00:03:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.795 EAL: request: mp_malloc_sync 00:03:20.795 EAL: No shared files mode enabled, IPC is disabled 00:03:20.795 EAL: Heap on socket 0 was expanded by 6MB 00:03:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.795 EAL: request: mp_malloc_sync 00:03:20.795 EAL: No shared files mode enabled, IPC is disabled 00:03:20.795 EAL: Heap on socket 0 was shrunk by 6MB 00:03:20.795 EAL: Trying to obtain current memory policy. 00:03:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.795 EAL: Restoring previous memory policy: 4 00:03:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.795 EAL: request: mp_malloc_sync 00:03:20.795 EAL: No shared files mode enabled, IPC is disabled 00:03:20.795 EAL: Heap on socket 0 was expanded by 10MB 00:03:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.795 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was shrunk by 10MB 00:03:20.796 EAL: Trying to obtain current memory policy. 00:03:20.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.796 EAL: Restoring previous memory policy: 4 00:03:20.796 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.796 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was expanded by 18MB 00:03:20.796 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.796 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was shrunk by 18MB 00:03:20.796 EAL: Trying to obtain current memory policy. 
00:03:20.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.796 EAL: Restoring previous memory policy: 4 00:03:20.796 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.796 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was expanded by 34MB 00:03:20.796 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.796 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was shrunk by 34MB 00:03:20.796 EAL: Trying to obtain current memory policy. 00:03:20.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.796 EAL: Restoring previous memory policy: 4 00:03:20.796 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.796 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was expanded by 66MB 00:03:20.796 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.796 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was shrunk by 66MB 00:03:20.796 EAL: Trying to obtain current memory policy. 00:03:20.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.796 EAL: Restoring previous memory policy: 4 00:03:20.796 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.796 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was expanded by 130MB 00:03:20.796 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.796 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was shrunk by 130MB 00:03:20.796 EAL: Trying to obtain current memory policy. 
00:03:20.796 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:20.796 EAL: Restoring previous memory policy: 4 00:03:20.796 EAL: Calling mem event callback 'spdk:(nil)' 00:03:20.796 EAL: request: mp_malloc_sync 00:03:20.796 EAL: No shared files mode enabled, IPC is disabled 00:03:20.796 EAL: Heap on socket 0 was expanded by 258MB 00:03:21.055 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.055 EAL: request: mp_malloc_sync 00:03:21.055 EAL: No shared files mode enabled, IPC is disabled 00:03:21.055 EAL: Heap on socket 0 was shrunk by 258MB 00:03:21.055 EAL: Trying to obtain current memory policy. 00:03:21.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.055 EAL: Restoring previous memory policy: 4 00:03:21.055 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.055 EAL: request: mp_malloc_sync 00:03:21.055 EAL: No shared files mode enabled, IPC is disabled 00:03:21.055 EAL: Heap on socket 0 was expanded by 514MB 00:03:21.055 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.314 EAL: request: mp_malloc_sync 00:03:21.314 EAL: No shared files mode enabled, IPC is disabled 00:03:21.314 EAL: Heap on socket 0 was shrunk by 514MB 00:03:21.314 EAL: Trying to obtain current memory policy. 
00:03:21.314 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.573 EAL: Restoring previous memory policy: 4 00:03:21.573 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.573 EAL: request: mp_malloc_sync 00:03:21.573 EAL: No shared files mode enabled, IPC is disabled 00:03:21.573 EAL: Heap on socket 0 was expanded by 1026MB 00:03:21.573 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.833 EAL: request: mp_malloc_sync 00:03:21.833 EAL: No shared files mode enabled, IPC is disabled 00:03:21.833 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:21.833 passed 00:03:21.833 00:03:21.833 Run Summary: Type Total Ran Passed Failed Inactive 00:03:21.833 suites 1 1 n/a 0 0 00:03:21.833 tests 2 2 2 0 0 00:03:21.833 asserts 497 497 497 0 n/a 00:03:21.833 00:03:21.833 Elapsed time = 0.981 seconds 00:03:21.833 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.833 EAL: request: mp_malloc_sync 00:03:21.833 EAL: No shared files mode enabled, IPC is disabled 00:03:21.833 EAL: Heap on socket 0 was shrunk by 2MB 00:03:21.833 EAL: No shared files mode enabled, IPC is disabled 00:03:21.833 EAL: No shared files mode enabled, IPC is disabled 00:03:21.833 EAL: No shared files mode enabled, IPC is disabled 00:03:21.833 00:03:21.833 real 0m1.118s 00:03:21.833 user 0m0.641s 00:03:21.833 sys 0m0.446s 00:03:21.833 09:35:44 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:21.833 09:35:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:21.833 ************************************ 00:03:21.833 END TEST env_vtophys 00:03:21.833 ************************************ 00:03:21.833 09:35:45 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:21.833 09:35:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.833 09:35:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.833 09:35:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:21.833 
************************************ 00:03:21.833 START TEST env_pci 00:03:21.833 ************************************ 00:03:21.833 09:35:45 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:21.833 00:03:21.833 00:03:21.833 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.833 http://cunit.sourceforge.net/ 00:03:21.833 00:03:21.833 00:03:21.833 Suite: pci 00:03:21.833 Test: pci_hook ...[2024-11-20 09:35:45.065457] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2714448 has claimed it 00:03:21.833 EAL: Cannot find device (10000:00:01.0) 00:03:21.833 EAL: Failed to attach device on primary process 00:03:21.833 passed 00:03:21.833 00:03:21.833 Run Summary: Type Total Ran Passed Failed Inactive 00:03:21.833 suites 1 1 n/a 0 0 00:03:21.833 tests 1 1 1 0 0 00:03:21.833 asserts 25 25 25 0 n/a 00:03:21.833 00:03:21.833 Elapsed time = 0.028 seconds 00:03:21.833 00:03:21.833 real 0m0.047s 00:03:21.833 user 0m0.016s 00:03:21.833 sys 0m0.031s 00:03:21.833 09:35:45 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:21.833 09:35:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:21.833 ************************************ 00:03:21.833 END TEST env_pci 00:03:21.833 ************************************ 00:03:21.833 09:35:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:21.833 09:35:45 env -- env/env.sh@15 -- # uname 00:03:21.833 09:35:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:21.833 09:35:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:21.833 09:35:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:21.833 09:35:45 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:21.833 09:35:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.833 09:35:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.092 ************************************ 00:03:22.092 START TEST env_dpdk_post_init 00:03:22.092 ************************************ 00:03:22.092 09:35:45 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.092 EAL: Detected CPU lcores: 96 00:03:22.092 EAL: Detected NUMA nodes: 2 00:03:22.092 EAL: Detected shared linkage of DPDK 00:03:22.092 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.092 EAL: Selected IOVA mode 'VA' 00:03:22.092 EAL: VFIO support initialized 00:03:22.092 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.092 EAL: Using IOMMU type 1 (Type 1) 00:03:22.093 EAL: Ignore mapping IO port bar(1) 00:03:22.093 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:22.093 EAL: Ignore mapping IO port bar(1) 00:03:22.093 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:22.093 EAL: Ignore mapping IO port bar(1) 00:03:22.093 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:22.093 EAL: Ignore mapping IO port bar(1) 00:03:22.093 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:22.093 EAL: Ignore mapping IO port bar(1) 00:03:22.093 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:22.093 EAL: Ignore mapping IO port bar(1) 00:03:22.093 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:22.093 EAL: Ignore mapping IO port bar(1) 00:03:22.093 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:22.093 EAL: Ignore mapping IO port bar(1) 00:03:22.093 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:23.031 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:23.031 EAL: Ignore mapping IO port bar(1) 00:03:23.031 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:23.031 EAL: Ignore mapping IO port bar(1) 00:03:23.031 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:23.031 EAL: Ignore mapping IO port bar(1) 00:03:23.031 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:23.031 EAL: Ignore mapping IO port bar(1) 00:03:23.031 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:23.031 EAL: Ignore mapping IO port bar(1) 00:03:23.031 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:23.031 EAL: Ignore mapping IO port bar(1) 00:03:23.031 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:23.031 EAL: Ignore mapping IO port bar(1) 00:03:23.031 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:23.031 EAL: Ignore mapping IO port bar(1) 00:03:23.031 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:26.325 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:26.325 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:26.325 Starting DPDK initialization... 00:03:26.325 Starting SPDK post initialization... 00:03:26.325 SPDK NVMe probe 00:03:26.325 Attaching to 0000:5e:00.0 00:03:26.325 Attached to 0000:5e:00.0 00:03:26.325 Cleaning up... 
00:03:26.325 00:03:26.325 real 0m4.365s 00:03:26.325 user 0m2.984s 00:03:26.325 sys 0m0.452s 00:03:26.325 09:35:49 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.325 09:35:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:26.325 ************************************ 00:03:26.325 END TEST env_dpdk_post_init 00:03:26.325 ************************************ 00:03:26.325 09:35:49 env -- env/env.sh@26 -- # uname 00:03:26.325 09:35:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:26.325 09:35:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.325 09:35:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.325 09:35:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.325 09:35:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.325 ************************************ 00:03:26.325 START TEST env_mem_callbacks 00:03:26.325 ************************************ 00:03:26.325 09:35:49 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:26.325 EAL: Detected CPU lcores: 96 00:03:26.325 EAL: Detected NUMA nodes: 2 00:03:26.325 EAL: Detected shared linkage of DPDK 00:03:26.325 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:26.584 EAL: Selected IOVA mode 'VA' 00:03:26.584 EAL: VFIO support initialized 00:03:26.584 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:26.584 00:03:26.584 00:03:26.584 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.584 http://cunit.sourceforge.net/ 00:03:26.584 00:03:26.584 00:03:26.584 Suite: memory 00:03:26.584 Test: test ... 
00:03:26.584 register 0x200000200000 2097152 00:03:26.584 malloc 3145728 00:03:26.584 register 0x200000400000 4194304 00:03:26.584 buf 0x200000500000 len 3145728 PASSED 00:03:26.584 malloc 64 00:03:26.584 buf 0x2000004fff40 len 64 PASSED 00:03:26.584 malloc 4194304 00:03:26.584 register 0x200000800000 6291456 00:03:26.584 buf 0x200000a00000 len 4194304 PASSED 00:03:26.584 free 0x200000500000 3145728 00:03:26.584 free 0x2000004fff40 64 00:03:26.584 unregister 0x200000400000 4194304 PASSED 00:03:26.584 free 0x200000a00000 4194304 00:03:26.584 unregister 0x200000800000 6291456 PASSED 00:03:26.584 malloc 8388608 00:03:26.584 register 0x200000400000 10485760 00:03:26.584 buf 0x200000600000 len 8388608 PASSED 00:03:26.584 free 0x200000600000 8388608 00:03:26.584 unregister 0x200000400000 10485760 PASSED 00:03:26.584 passed 00:03:26.584 00:03:26.584 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.584 suites 1 1 n/a 0 0 00:03:26.584 tests 1 1 1 0 0 00:03:26.584 asserts 15 15 15 0 n/a 00:03:26.584 00:03:26.584 Elapsed time = 0.008 seconds 00:03:26.584 00:03:26.584 real 0m0.060s 00:03:26.584 user 0m0.019s 00:03:26.584 sys 0m0.041s 00:03:26.584 09:35:49 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.584 09:35:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:26.584 ************************************ 00:03:26.584 END TEST env_mem_callbacks 00:03:26.585 ************************************ 00:03:26.585 00:03:26.585 real 0m6.289s 00:03:26.585 user 0m4.036s 00:03:26.585 sys 0m1.330s 00:03:26.585 09:35:49 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.585 09:35:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.585 ************************************ 00:03:26.585 END TEST env 00:03:26.585 ************************************ 00:03:26.585 09:35:49 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:26.585 09:35:49 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.585 09:35:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.585 09:35:49 -- common/autotest_common.sh@10 -- # set +x 00:03:26.585 ************************************ 00:03:26.585 START TEST rpc 00:03:26.585 ************************************ 00:03:26.585 09:35:49 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:26.585 * Looking for test storage... 00:03:26.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:26.585 09:35:49 rpc -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:03:26.585 09:35:49 rpc -- common/autotest_common.sh@1703 -- # lcov --version 00:03:26.585 09:35:49 rpc -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:03:26.844 09:35:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:26.844 09:35:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:26.844 09:35:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:26.844 09:35:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:26.844 09:35:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:26.844 09:35:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:26.844 09:35:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:26.844 09:35:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:26.844 09:35:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:26.844 09:35:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:26.844 09:35:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:26.844 09:35:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:26.844 09:35:49 rpc -- scripts/common.sh@345 -- # : 1 00:03:26.844 09:35:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:26.844 09:35:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:26.844 09:35:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:26.844 09:35:49 rpc -- scripts/common.sh@353 -- # local d=1 00:03:26.844 09:35:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:26.844 09:35:49 rpc -- scripts/common.sh@355 -- # echo 1 00:03:26.844 09:35:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:26.844 09:35:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:26.844 09:35:49 rpc -- scripts/common.sh@353 -- # local d=2 00:03:26.844 09:35:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:26.844 09:35:49 rpc -- scripts/common.sh@355 -- # echo 2 00:03:26.844 09:35:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:26.844 09:35:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:26.844 09:35:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:26.844 09:35:49 rpc -- scripts/common.sh@368 -- # return 0 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:03:26.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.844 --rc genhtml_branch_coverage=1 00:03:26.844 --rc genhtml_function_coverage=1 00:03:26.844 --rc genhtml_legend=1 00:03:26.844 --rc geninfo_all_blocks=1 00:03:26.844 --rc geninfo_unexecuted_blocks=1 00:03:26.844 00:03:26.844 ' 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:03:26.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.844 --rc genhtml_branch_coverage=1 00:03:26.844 --rc genhtml_function_coverage=1 00:03:26.844 --rc genhtml_legend=1 00:03:26.844 --rc geninfo_all_blocks=1 00:03:26.844 --rc geninfo_unexecuted_blocks=1 00:03:26.844 00:03:26.844 ' 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:03:26.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:26.844 --rc genhtml_branch_coverage=1 00:03:26.844 --rc genhtml_function_coverage=1 00:03:26.844 --rc genhtml_legend=1 00:03:26.844 --rc geninfo_all_blocks=1 00:03:26.844 --rc geninfo_unexecuted_blocks=1 00:03:26.844 00:03:26.844 ' 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:03:26.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:26.844 --rc genhtml_branch_coverage=1 00:03:26.844 --rc genhtml_function_coverage=1 00:03:26.844 --rc genhtml_legend=1 00:03:26.844 --rc geninfo_all_blocks=1 00:03:26.844 --rc geninfo_unexecuted_blocks=1 00:03:26.844 00:03:26.844 ' 00:03:26.844 09:35:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2715379 00:03:26.844 09:35:49 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:26.844 09:35:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:26.844 09:35:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2715379 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@835 -- # '[' -z 2715379 ']' 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:26.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:26.844 09:35:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.844 [2024-11-20 09:35:50.010512] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:03:26.845 [2024-11-20 09:35:50.010564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715379 ] 00:03:26.845 [2024-11-20 09:35:50.085627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:26.845 [2024-11-20 09:35:50.128379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:26.845 [2024-11-20 09:35:50.128419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2715379' to capture a snapshot of events at runtime. 00:03:26.845 [2024-11-20 09:35:50.128426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:26.845 [2024-11-20 09:35:50.128432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:26.845 [2024-11-20 09:35:50.128438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2715379 for offline analysis/debug. 
00:03:26.845 [2024-11-20 09:35:50.128997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:27.104 09:35:50 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:27.104 09:35:50 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:27.104 09:35:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.104 09:35:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:27.104 09:35:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:27.104 09:35:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:27.104 09:35:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.104 09:35:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.104 09:35:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.104 ************************************ 00:03:27.104 START TEST rpc_integrity 00:03:27.104 ************************************ 00:03:27.104 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:27.104 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:27.104 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.104 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.104 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.104 09:35:50 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:27.104 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:27.104 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:27.104 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:27.104 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.104 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.364 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:27.364 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.364 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:27.364 { 00:03:27.364 "name": "Malloc0", 00:03:27.364 "aliases": [ 00:03:27.364 "0cfbac03-5b1e-490a-90d7-b7c4ad76a246" 00:03:27.364 ], 00:03:27.364 "product_name": "Malloc disk", 00:03:27.364 "block_size": 512, 00:03:27.364 "num_blocks": 16384, 00:03:27.364 "uuid": "0cfbac03-5b1e-490a-90d7-b7c4ad76a246", 00:03:27.364 "assigned_rate_limits": { 00:03:27.364 "rw_ios_per_sec": 0, 00:03:27.364 "rw_mbytes_per_sec": 0, 00:03:27.364 "r_mbytes_per_sec": 0, 00:03:27.364 "w_mbytes_per_sec": 0 00:03:27.364 }, 00:03:27.364 "claimed": false, 00:03:27.364 "zoned": false, 00:03:27.364 "supported_io_types": { 00:03:27.364 "read": true, 00:03:27.364 "write": true, 00:03:27.364 "unmap": true, 00:03:27.364 "flush": true, 00:03:27.364 "reset": true, 00:03:27.364 "nvme_admin": false, 00:03:27.364 "nvme_io": false, 00:03:27.364 "nvme_io_md": false, 00:03:27.364 "write_zeroes": true, 00:03:27.364 "zcopy": true, 00:03:27.364 "get_zone_info": false, 00:03:27.364 
"zone_management": false, 00:03:27.364 "zone_append": false, 00:03:27.364 "compare": false, 00:03:27.364 "compare_and_write": false, 00:03:27.364 "abort": true, 00:03:27.364 "seek_hole": false, 00:03:27.364 "seek_data": false, 00:03:27.364 "copy": true, 00:03:27.364 "nvme_iov_md": false 00:03:27.364 }, 00:03:27.364 "memory_domains": [ 00:03:27.364 { 00:03:27.364 "dma_device_id": "system", 00:03:27.364 "dma_device_type": 1 00:03:27.364 }, 00:03:27.364 { 00:03:27.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.364 "dma_device_type": 2 00:03:27.364 } 00:03:27.364 ], 00:03:27.364 "driver_specific": {} 00:03:27.364 } 00:03:27.364 ]' 00:03:27.364 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:27.364 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:27.364 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.364 [2024-11-20 09:35:50.502019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:27.364 [2024-11-20 09:35:50.502049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:27.364 [2024-11-20 09:35:50.502061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xefa6e0 00:03:27.364 [2024-11-20 09:35:50.502068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:27.364 [2024-11-20 09:35:50.503188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:27.364 [2024-11-20 09:35:50.503211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:27.364 Passthru0 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.364 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.364 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.364 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:27.364 { 00:03:27.364 "name": "Malloc0", 00:03:27.364 "aliases": [ 00:03:27.364 "0cfbac03-5b1e-490a-90d7-b7c4ad76a246" 00:03:27.364 ], 00:03:27.364 "product_name": "Malloc disk", 00:03:27.364 "block_size": 512, 00:03:27.364 "num_blocks": 16384, 00:03:27.364 "uuid": "0cfbac03-5b1e-490a-90d7-b7c4ad76a246", 00:03:27.364 "assigned_rate_limits": { 00:03:27.364 "rw_ios_per_sec": 0, 00:03:27.364 "rw_mbytes_per_sec": 0, 00:03:27.364 "r_mbytes_per_sec": 0, 00:03:27.364 "w_mbytes_per_sec": 0 00:03:27.364 }, 00:03:27.364 "claimed": true, 00:03:27.364 "claim_type": "exclusive_write", 00:03:27.364 "zoned": false, 00:03:27.364 "supported_io_types": { 00:03:27.364 "read": true, 00:03:27.364 "write": true, 00:03:27.364 "unmap": true, 00:03:27.364 "flush": true, 00:03:27.364 "reset": true, 00:03:27.364 "nvme_admin": false, 00:03:27.364 "nvme_io": false, 00:03:27.364 "nvme_io_md": false, 00:03:27.364 "write_zeroes": true, 00:03:27.364 "zcopy": true, 00:03:27.364 "get_zone_info": false, 00:03:27.364 "zone_management": false, 00:03:27.364 "zone_append": false, 00:03:27.364 "compare": false, 00:03:27.364 "compare_and_write": false, 00:03:27.364 "abort": true, 00:03:27.364 "seek_hole": false, 00:03:27.364 "seek_data": false, 00:03:27.364 "copy": true, 00:03:27.364 "nvme_iov_md": false 00:03:27.364 }, 00:03:27.364 "memory_domains": [ 00:03:27.364 { 00:03:27.364 "dma_device_id": "system", 00:03:27.364 "dma_device_type": 1 00:03:27.364 }, 00:03:27.364 { 00:03:27.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.364 "dma_device_type": 2 00:03:27.364 } 00:03:27.364 ], 00:03:27.364 "driver_specific": {} 00:03:27.364 }, 00:03:27.364 { 
00:03:27.364 "name": "Passthru0", 00:03:27.364 "aliases": [ 00:03:27.364 "65faae0a-7ff8-5130-843e-72230ffdc70a" 00:03:27.364 ], 00:03:27.364 "product_name": "passthru", 00:03:27.364 "block_size": 512, 00:03:27.364 "num_blocks": 16384, 00:03:27.364 "uuid": "65faae0a-7ff8-5130-843e-72230ffdc70a", 00:03:27.364 "assigned_rate_limits": { 00:03:27.364 "rw_ios_per_sec": 0, 00:03:27.364 "rw_mbytes_per_sec": 0, 00:03:27.364 "r_mbytes_per_sec": 0, 00:03:27.364 "w_mbytes_per_sec": 0 00:03:27.364 }, 00:03:27.364 "claimed": false, 00:03:27.364 "zoned": false, 00:03:27.364 "supported_io_types": { 00:03:27.364 "read": true, 00:03:27.364 "write": true, 00:03:27.364 "unmap": true, 00:03:27.364 "flush": true, 00:03:27.364 "reset": true, 00:03:27.364 "nvme_admin": false, 00:03:27.364 "nvme_io": false, 00:03:27.364 "nvme_io_md": false, 00:03:27.364 "write_zeroes": true, 00:03:27.364 "zcopy": true, 00:03:27.364 "get_zone_info": false, 00:03:27.364 "zone_management": false, 00:03:27.364 "zone_append": false, 00:03:27.364 "compare": false, 00:03:27.365 "compare_and_write": false, 00:03:27.365 "abort": true, 00:03:27.365 "seek_hole": false, 00:03:27.365 "seek_data": false, 00:03:27.365 "copy": true, 00:03:27.365 "nvme_iov_md": false 00:03:27.365 }, 00:03:27.365 "memory_domains": [ 00:03:27.365 { 00:03:27.365 "dma_device_id": "system", 00:03:27.365 "dma_device_type": 1 00:03:27.365 }, 00:03:27.365 { 00:03:27.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.365 "dma_device_type": 2 00:03:27.365 } 00:03:27.365 ], 00:03:27.365 "driver_specific": { 00:03:27.365 "passthru": { 00:03:27.365 "name": "Passthru0", 00:03:27.365 "base_bdev_name": "Malloc0" 00:03:27.365 } 00:03:27.365 } 00:03:27.365 } 00:03:27.365 ]' 00:03:27.365 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:27.365 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:27.365 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:27.365 09:35:50 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.365 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.365 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.365 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:27.365 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:27.365 09:35:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:27.365 00:03:27.365 real 0m0.270s 00:03:27.365 user 0m0.173s 00:03:27.365 sys 0m0.032s 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:27.365 09:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:27.365 ************************************ 00:03:27.365 END TEST rpc_integrity 00:03:27.365 ************************************ 00:03:27.365 09:35:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:27.365 09:35:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.365 09:35:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.365 09:35:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.624 ************************************ 00:03:27.624 START TEST rpc_plugins 
00:03:27.624 ************************************ 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:27.624 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.624 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:27.624 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.624 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:27.624 { 00:03:27.624 "name": "Malloc1", 00:03:27.624 "aliases": [ 00:03:27.624 "2f9613db-8f48-4f71-b53a-8ada25dfec70" 00:03:27.624 ], 00:03:27.624 "product_name": "Malloc disk", 00:03:27.624 "block_size": 4096, 00:03:27.624 "num_blocks": 256, 00:03:27.624 "uuid": "2f9613db-8f48-4f71-b53a-8ada25dfec70", 00:03:27.624 "assigned_rate_limits": { 00:03:27.624 "rw_ios_per_sec": 0, 00:03:27.624 "rw_mbytes_per_sec": 0, 00:03:27.624 "r_mbytes_per_sec": 0, 00:03:27.624 "w_mbytes_per_sec": 0 00:03:27.624 }, 00:03:27.624 "claimed": false, 00:03:27.624 "zoned": false, 00:03:27.624 "supported_io_types": { 00:03:27.624 "read": true, 00:03:27.624 "write": true, 00:03:27.624 "unmap": true, 00:03:27.624 "flush": true, 00:03:27.624 "reset": true, 00:03:27.624 "nvme_admin": false, 00:03:27.624 "nvme_io": false, 00:03:27.624 "nvme_io_md": false, 00:03:27.624 "write_zeroes": true, 00:03:27.624 "zcopy": true, 00:03:27.624 "get_zone_info": false, 00:03:27.624 "zone_management": false, 00:03:27.624 
"zone_append": false, 00:03:27.624 "compare": false, 00:03:27.624 "compare_and_write": false, 00:03:27.624 "abort": true, 00:03:27.624 "seek_hole": false, 00:03:27.624 "seek_data": false, 00:03:27.624 "copy": true, 00:03:27.624 "nvme_iov_md": false 00:03:27.624 }, 00:03:27.624 "memory_domains": [ 00:03:27.624 { 00:03:27.624 "dma_device_id": "system", 00:03:27.624 "dma_device_type": 1 00:03:27.624 }, 00:03:27.624 { 00:03:27.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:27.624 "dma_device_type": 2 00:03:27.624 } 00:03:27.624 ], 00:03:27.624 "driver_specific": {} 00:03:27.624 } 00:03:27.624 ]' 00:03:27.624 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:27.624 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:27.624 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.624 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.624 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:27.625 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.625 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.625 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.625 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:27.625 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:27.625 09:35:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:27.625 00:03:27.625 real 0m0.140s 00:03:27.625 user 0m0.088s 00:03:27.625 sys 0m0.018s 00:03:27.625 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:27.625 09:35:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:27.625 ************************************ 
00:03:27.625 END TEST rpc_plugins 00:03:27.625 ************************************ 00:03:27.625 09:35:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:27.625 09:35:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.625 09:35:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.625 09:35:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.625 ************************************ 00:03:27.625 START TEST rpc_trace_cmd_test 00:03:27.625 ************************************ 00:03:27.625 09:35:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:27.625 09:35:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:27.625 09:35:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:27.625 09:35:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.625 09:35:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:27.625 09:35:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.625 09:35:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:27.625 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2715379", 00:03:27.625 "tpoint_group_mask": "0x8", 00:03:27.625 "iscsi_conn": { 00:03:27.625 "mask": "0x2", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "scsi": { 00:03:27.625 "mask": "0x4", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "bdev": { 00:03:27.625 "mask": "0x8", 00:03:27.625 "tpoint_mask": "0xffffffffffffffff" 00:03:27.625 }, 00:03:27.625 "nvmf_rdma": { 00:03:27.625 "mask": "0x10", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "nvmf_tcp": { 00:03:27.625 "mask": "0x20", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "ftl": { 00:03:27.625 "mask": "0x40", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "blobfs": { 00:03:27.625 "mask": "0x80", 00:03:27.625 
"tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "dsa": { 00:03:27.625 "mask": "0x200", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "thread": { 00:03:27.625 "mask": "0x400", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "nvme_pcie": { 00:03:27.625 "mask": "0x800", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "iaa": { 00:03:27.625 "mask": "0x1000", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "nvme_tcp": { 00:03:27.625 "mask": "0x2000", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "bdev_nvme": { 00:03:27.625 "mask": "0x4000", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "sock": { 00:03:27.625 "mask": "0x8000", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "blob": { 00:03:27.625 "mask": "0x10000", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "bdev_raid": { 00:03:27.625 "mask": "0x20000", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 }, 00:03:27.625 "scheduler": { 00:03:27.625 "mask": "0x40000", 00:03:27.625 "tpoint_mask": "0x0" 00:03:27.625 } 00:03:27.625 }' 00:03:27.625 09:35:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:27.884 09:35:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:27.884 09:35:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:27.885 09:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:27.885 09:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:27.885 09:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:27.885 09:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:27.885 09:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:27.885 09:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:27.885 09:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:27.885 00:03:27.885 real 0m0.226s 00:03:27.885 user 0m0.196s 00:03:27.885 sys 0m0.022s 00:03:27.885 09:35:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:27.885 09:35:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:27.885 ************************************ 00:03:27.885 END TEST rpc_trace_cmd_test 00:03:27.885 ************************************ 00:03:27.885 09:35:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:27.885 09:35:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:27.885 09:35:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:27.885 09:35:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:27.885 09:35:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:27.885 09:35:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.144 ************************************ 00:03:28.144 START TEST rpc_daemon_integrity 00:03:28.144 ************************************ 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.144 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:28.144 { 00:03:28.144 "name": "Malloc2", 00:03:28.144 "aliases": [ 00:03:28.144 "42a234f0-7763-4cd0-8382-e6f75fd15fb8" 00:03:28.144 ], 00:03:28.144 "product_name": "Malloc disk", 00:03:28.144 "block_size": 512, 00:03:28.144 "num_blocks": 16384, 00:03:28.144 "uuid": "42a234f0-7763-4cd0-8382-e6f75fd15fb8", 00:03:28.144 "assigned_rate_limits": { 00:03:28.144 "rw_ios_per_sec": 0, 00:03:28.144 "rw_mbytes_per_sec": 0, 00:03:28.144 "r_mbytes_per_sec": 0, 00:03:28.144 "w_mbytes_per_sec": 0 00:03:28.144 }, 00:03:28.144 "claimed": false, 00:03:28.144 "zoned": false, 00:03:28.144 "supported_io_types": { 00:03:28.144 "read": true, 00:03:28.144 "write": true, 00:03:28.144 "unmap": true, 00:03:28.144 "flush": true, 00:03:28.144 "reset": true, 00:03:28.144 "nvme_admin": false, 00:03:28.144 "nvme_io": false, 00:03:28.144 "nvme_io_md": false, 00:03:28.144 "write_zeroes": true, 00:03:28.144 "zcopy": true, 00:03:28.144 "get_zone_info": false, 00:03:28.144 "zone_management": false, 00:03:28.144 "zone_append": false, 00:03:28.144 "compare": false, 00:03:28.144 "compare_and_write": false, 00:03:28.144 "abort": true, 00:03:28.144 "seek_hole": false, 00:03:28.144 "seek_data": false, 00:03:28.144 "copy": true, 00:03:28.144 "nvme_iov_md": false 00:03:28.144 }, 00:03:28.144 "memory_domains": [ 00:03:28.144 { 
00:03:28.144 "dma_device_id": "system", 00:03:28.144 "dma_device_type": 1 00:03:28.144 }, 00:03:28.144 { 00:03:28.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.144 "dma_device_type": 2 00:03:28.144 } 00:03:28.144 ], 00:03:28.144 "driver_specific": {} 00:03:28.144 } 00:03:28.144 ]' 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.145 [2024-11-20 09:35:51.348361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:28.145 [2024-11-20 09:35:51.348390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:28.145 [2024-11-20 09:35:51.348402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf8ab70 00:03:28.145 [2024-11-20 09:35:51.348409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:28.145 [2024-11-20 09:35:51.349407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:28.145 [2024-11-20 09:35:51.349428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:28.145 Passthru0 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
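At this point in the rpc_daemon_integrity test, Malloc2 has been wrapped by Passthru0 and the script is about to re-run `bdev_get_bdevs` and verify the result with `jq length` (rpc.sh@21 checks for exactly 2 bdevs). The following is a minimal Python sketch of those jq-style assertions, run against a trimmed, hypothetical copy of the JSON shown in this log (the field subset is illustrative, not the full descriptor):

```python
import json

# Trimmed sample mirroring the bdev_get_bdevs output captured in this log:
# the malloc bdev is claimed exclusively by the passthru bdev layered on it.
bdevs_json = '''
[
  {"name": "Malloc2", "product_name": "Malloc disk",
   "claimed": true, "claim_type": "exclusive_write",
   "driver_specific": {}},
  {"name": "Passthru0", "product_name": "passthru",
   "claimed": false,
   "driver_specific": {"passthru": {"name": "Passthru0",
                                    "base_bdev_name": "Malloc2"}}}
]
'''

bdevs = json.loads(bdevs_json)

# rpc.sh@21: `jq length` must report 2 once the passthru is registered.
assert len(bdevs) == 2

# The passthru bdev must record the malloc bdev it claims as its base.
malloc, passthru = bdevs
assert malloc["claimed"] and malloc["claim_type"] == "exclusive_write"
assert passthru["driver_specific"]["passthru"]["base_bdev_name"] == "Malloc2"
print("integrity checks passed")
```

After `bdev_passthru_delete` and `bdev_malloc_delete` run (rpc.sh@23–25), the same listing collapses back to `[]` and the final `jq length` check expects 0.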
00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:28.145 { 00:03:28.145 "name": "Malloc2", 00:03:28.145 "aliases": [ 00:03:28.145 "42a234f0-7763-4cd0-8382-e6f75fd15fb8" 00:03:28.145 ], 00:03:28.145 "product_name": "Malloc disk", 00:03:28.145 "block_size": 512, 00:03:28.145 "num_blocks": 16384, 00:03:28.145 "uuid": "42a234f0-7763-4cd0-8382-e6f75fd15fb8", 00:03:28.145 "assigned_rate_limits": { 00:03:28.145 "rw_ios_per_sec": 0, 00:03:28.145 "rw_mbytes_per_sec": 0, 00:03:28.145 "r_mbytes_per_sec": 0, 00:03:28.145 "w_mbytes_per_sec": 0 00:03:28.145 }, 00:03:28.145 "claimed": true, 00:03:28.145 "claim_type": "exclusive_write", 00:03:28.145 "zoned": false, 00:03:28.145 "supported_io_types": { 00:03:28.145 "read": true, 00:03:28.145 "write": true, 00:03:28.145 "unmap": true, 00:03:28.145 "flush": true, 00:03:28.145 "reset": true, 00:03:28.145 "nvme_admin": false, 00:03:28.145 "nvme_io": false, 00:03:28.145 "nvme_io_md": false, 00:03:28.145 "write_zeroes": true, 00:03:28.145 "zcopy": true, 00:03:28.145 "get_zone_info": false, 00:03:28.145 "zone_management": false, 00:03:28.145 "zone_append": false, 00:03:28.145 "compare": false, 00:03:28.145 "compare_and_write": false, 00:03:28.145 "abort": true, 00:03:28.145 "seek_hole": false, 00:03:28.145 "seek_data": false, 00:03:28.145 "copy": true, 00:03:28.145 "nvme_iov_md": false 00:03:28.145 }, 00:03:28.145 "memory_domains": [ 00:03:28.145 { 00:03:28.145 "dma_device_id": "system", 00:03:28.145 "dma_device_type": 1 00:03:28.145 }, 00:03:28.145 { 00:03:28.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.145 "dma_device_type": 2 00:03:28.145 } 00:03:28.145 ], 00:03:28.145 "driver_specific": {} 00:03:28.145 }, 00:03:28.145 { 00:03:28.145 "name": "Passthru0", 00:03:28.145 "aliases": [ 00:03:28.145 "89b9abe8-978c-582d-ab6a-e472c474a6db" 00:03:28.145 ], 00:03:28.145 "product_name": "passthru", 00:03:28.145 "block_size": 512, 00:03:28.145 "num_blocks": 16384, 00:03:28.145 "uuid": 
"89b9abe8-978c-582d-ab6a-e472c474a6db", 00:03:28.145 "assigned_rate_limits": { 00:03:28.145 "rw_ios_per_sec": 0, 00:03:28.145 "rw_mbytes_per_sec": 0, 00:03:28.145 "r_mbytes_per_sec": 0, 00:03:28.145 "w_mbytes_per_sec": 0 00:03:28.145 }, 00:03:28.145 "claimed": false, 00:03:28.145 "zoned": false, 00:03:28.145 "supported_io_types": { 00:03:28.145 "read": true, 00:03:28.145 "write": true, 00:03:28.145 "unmap": true, 00:03:28.145 "flush": true, 00:03:28.145 "reset": true, 00:03:28.145 "nvme_admin": false, 00:03:28.145 "nvme_io": false, 00:03:28.145 "nvme_io_md": false, 00:03:28.145 "write_zeroes": true, 00:03:28.145 "zcopy": true, 00:03:28.145 "get_zone_info": false, 00:03:28.145 "zone_management": false, 00:03:28.145 "zone_append": false, 00:03:28.145 "compare": false, 00:03:28.145 "compare_and_write": false, 00:03:28.145 "abort": true, 00:03:28.145 "seek_hole": false, 00:03:28.145 "seek_data": false, 00:03:28.145 "copy": true, 00:03:28.145 "nvme_iov_md": false 00:03:28.145 }, 00:03:28.145 "memory_domains": [ 00:03:28.145 { 00:03:28.145 "dma_device_id": "system", 00:03:28.145 "dma_device_type": 1 00:03:28.145 }, 00:03:28.145 { 00:03:28.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.145 "dma_device_type": 2 00:03:28.145 } 00:03:28.145 ], 00:03:28.145 "driver_specific": { 00:03:28.145 "passthru": { 00:03:28.145 "name": "Passthru0", 00:03:28.145 "base_bdev_name": "Malloc2" 00:03:28.145 } 00:03:28.145 } 00:03:28.145 } 00:03:28.145 ]' 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:28.145 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:28.404 09:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:28.404 00:03:28.404 real 0m0.275s 00:03:28.404 user 0m0.169s 00:03:28.404 sys 0m0.039s 00:03:28.404 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.404 09:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:28.405 ************************************ 00:03:28.405 END TEST rpc_daemon_integrity 00:03:28.405 ************************************ 00:03:28.405 09:35:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:28.405 09:35:51 rpc -- rpc/rpc.sh@84 -- # killprocess 2715379 00:03:28.405 09:35:51 rpc -- common/autotest_common.sh@954 -- # '[' -z 2715379 ']' 00:03:28.405 09:35:51 rpc -- common/autotest_common.sh@958 -- # kill -0 2715379 00:03:28.405 09:35:51 rpc -- common/autotest_common.sh@959 -- # uname 00:03:28.405 09:35:51 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:28.405 09:35:51 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2715379 00:03:28.405 09:35:51 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:28.405 09:35:51 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:28.405 09:35:51 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2715379' 00:03:28.405 killing process with pid 2715379 00:03:28.405 09:35:51 rpc -- common/autotest_common.sh@973 -- # kill 2715379 00:03:28.405 09:35:51 rpc -- common/autotest_common.sh@978 -- # wait 2715379 00:03:28.664 00:03:28.664 real 0m2.097s 00:03:28.664 user 0m2.693s 00:03:28.664 sys 0m0.678s 00:03:28.664 09:35:51 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.664 09:35:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.664 ************************************ 00:03:28.664 END TEST rpc 00:03:28.664 ************************************ 00:03:28.664 09:35:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:28.664 09:35:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.664 09:35:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.664 09:35:51 -- common/autotest_common.sh@10 -- # set +x 00:03:28.664 ************************************ 00:03:28.664 START TEST skip_rpc 00:03:28.664 ************************************ 00:03:28.664 09:35:51 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:28.924 * Looking for test storage... 
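The teardown above runs autotest_common.sh's `killprocess` helper: probe the pid with `kill -0`, read its command name via `ps --no-headers -o comm=`, refuse to signal a `sudo` wrapper directly, then kill and `wait`. A hedged Python approximation of that guard logic (the function name mirrors the shell helper; reading `/proc/<pid>/comm` stands in for the `ps` call, so this sketch assumes Linux):

```python
import os
import signal
import subprocess

def killprocess(pid: int) -> bool:
    """Sketch of autotest_common.sh's killprocess: verify the pid is
    alive, refuse to signal a sudo wrapper, then terminate it."""
    try:
        os.kill(pid, 0)                # 'kill -0': probe without signalling
    except ProcessLookupError:
        return False                   # already gone
    try:
        with open(f"/proc/{pid}/comm") as f:   # ps -o comm= equivalent
            name = f.read().strip()
    except FileNotFoundError:
        return False
    if name == "sudo":                 # never SIGTERM the sudo parent itself
        return False
    print(f"killing process with pid {pid}")
    os.kill(pid, signal.SIGTERM)
    return True

# Demonstration: spawn a throwaway child and tear it down the same way.
child = subprocess.Popen(["sleep", "30"])
assert killprocess(child.pid)
child.wait()
```

In the log the guarded process is the spdk_tgt reactor (`process_name=reactor_0`), which passes the `!= sudo` check, so the kill and the subsequent `wait 2715379` proceed.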
00:03:28.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1703 -- # lcov --version 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:28.924 09:35:52 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:03:28.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.924 --rc genhtml_branch_coverage=1 00:03:28.924 --rc genhtml_function_coverage=1 00:03:28.924 --rc genhtml_legend=1 00:03:28.924 --rc geninfo_all_blocks=1 00:03:28.924 --rc geninfo_unexecuted_blocks=1 00:03:28.924 00:03:28.924 ' 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:03:28.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.924 --rc genhtml_branch_coverage=1 00:03:28.924 --rc genhtml_function_coverage=1 00:03:28.924 --rc genhtml_legend=1 00:03:28.924 --rc geninfo_all_blocks=1 00:03:28.924 --rc geninfo_unexecuted_blocks=1 00:03:28.924 00:03:28.924 ' 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1717 -- # export 
'LCOV=lcov 00:03:28.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.924 --rc genhtml_branch_coverage=1 00:03:28.924 --rc genhtml_function_coverage=1 00:03:28.924 --rc genhtml_legend=1 00:03:28.924 --rc geninfo_all_blocks=1 00:03:28.924 --rc geninfo_unexecuted_blocks=1 00:03:28.924 00:03:28.924 ' 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:03:28.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.924 --rc genhtml_branch_coverage=1 00:03:28.924 --rc genhtml_function_coverage=1 00:03:28.924 --rc genhtml_legend=1 00:03:28.924 --rc geninfo_all_blocks=1 00:03:28.924 --rc geninfo_unexecuted_blocks=1 00:03:28.924 00:03:28.924 ' 00:03:28.924 09:35:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:28.924 09:35:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:28.924 09:35:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.924 09:35:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.924 ************************************ 00:03:28.924 START TEST skip_rpc 00:03:28.924 ************************************ 00:03:28.924 09:35:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:28.924 09:35:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2716014 00:03:28.924 09:35:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:28.924 09:35:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:28.924 09:35:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
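The `lt 1.15 2` check above drives scripts/common.sh's `cmp_versions`: split each version on `.`, `-` or `:` into component arrays (`read -ra ver1`), walk them left to right up to the longer array's length, and decide on the first unequal pair. A small Python sketch of that comparison, with missing components padded to 0 (padding behavior is my reading of the loop bound `v < (ver1_l > ver2_l ? ver1_l : ver2_l)`):

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Sketch of scripts/common.sh cmp_versions: split on '.', '-' or ':'
    and compare numeric components left to right, padding with zeros."""
    def split(v):
        return [int(x) if x.isdigit() else 0 for x in re.split(r"[.:-]", v)]
    a, b = split(v1), split(v2)
    width = max(len(a), len(b))            # loop runs to the longer array
    a += [0] * (width - len(a))
    b += [0] * (width - len(b))
    for x, y in zip(a, b):
        if x != y:
            return {"<": x < y, ">": x > y}[op]
    return op not in ("<", ">")            # all components equal

# The log's check: is the installed lcov older than 2.x?
assert cmp_versions("1.15", "<", "2") is True
```

Here the result is true, so the script selects the lcov 1.x flavor of the branch/function coverage options exported in the `LCOV_OPTS` block above.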
00:03:28.924 [2024-11-20 09:35:52.207978] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:03:28.924 [2024-11-20 09:35:52.208015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716014 ] 00:03:29.183 [2024-11-20 09:35:52.282712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.183 [2024-11-20 09:35:52.323259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:34.459 09:35:57 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2716014 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2716014 ']' 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2716014 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716014 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716014' 00:03:34.459 killing process with pid 2716014 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2716014 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2716014 00:03:34.459 00:03:34.459 real 0m5.365s 00:03:34.459 user 0m5.121s 00:03:34.459 sys 0m0.285s 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.459 09:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.459 ************************************ 00:03:34.459 END TEST skip_rpc 00:03:34.459 ************************************ 00:03:34.459 09:35:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:34.459 09:35:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.459 09:35:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.459 09:35:57 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.459 ************************************ 00:03:34.459 START TEST skip_rpc_with_json 00:03:34.459 ************************************ 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2716958 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2716958 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2716958 ']' 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:34.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:34.459 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.459 [2024-11-20 09:35:57.639893] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:03:34.459 [2024-11-20 09:35:57.639933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2716958 ] 00:03:34.459 [2024-11-20 09:35:57.714300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:34.459 [2024-11-20 09:35:57.753713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.719 [2024-11-20 09:35:57.980641] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:34.719 request: 00:03:34.719 { 00:03:34.719 "trtype": "tcp", 00:03:34.719 "method": "nvmf_get_transports", 00:03:34.719 "req_id": 1 00:03:34.719 } 00:03:34.719 Got JSON-RPC error response 00:03:34.719 response: 00:03:34.719 { 00:03:34.719 "code": -19, 00:03:34.719 "message": "No such device" 00:03:34.719 } 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.719 [2024-11-20 09:35:57.992752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:34.719 09:35:57 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.719 09:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:34.979 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.979 09:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:34.979 { 00:03:34.979 "subsystems": [ 00:03:34.979 { 00:03:34.979 "subsystem": "fsdev", 00:03:34.979 "config": [ 00:03:34.979 { 00:03:34.979 "method": "fsdev_set_opts", 00:03:34.979 "params": { 00:03:34.979 "fsdev_io_pool_size": 65535, 00:03:34.979 "fsdev_io_cache_size": 256 00:03:34.979 } 00:03:34.979 } 00:03:34.979 ] 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "subsystem": "vfio_user_target", 00:03:34.979 "config": null 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "subsystem": "keyring", 00:03:34.979 "config": [] 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "subsystem": "iobuf", 00:03:34.979 "config": [ 00:03:34.979 { 00:03:34.979 "method": "iobuf_set_options", 00:03:34.979 "params": { 00:03:34.979 "small_pool_count": 8192, 00:03:34.979 "large_pool_count": 1024, 00:03:34.979 "small_bufsize": 8192, 00:03:34.979 "large_bufsize": 135168, 00:03:34.979 "enable_numa": false 00:03:34.979 } 00:03:34.979 } 00:03:34.979 ] 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "subsystem": "sock", 00:03:34.979 "config": [ 00:03:34.979 { 00:03:34.979 "method": "sock_set_default_impl", 00:03:34.979 "params": { 00:03:34.979 "impl_name": "posix" 00:03:34.979 } 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "method": "sock_impl_set_options", 00:03:34.979 "params": { 00:03:34.979 "impl_name": "ssl", 00:03:34.979 "recv_buf_size": 4096, 00:03:34.979 "send_buf_size": 4096, 
00:03:34.979 "enable_recv_pipe": true, 00:03:34.979 "enable_quickack": false, 00:03:34.979 "enable_placement_id": 0, 00:03:34.979 "enable_zerocopy_send_server": true, 00:03:34.979 "enable_zerocopy_send_client": false, 00:03:34.979 "zerocopy_threshold": 0, 00:03:34.979 "tls_version": 0, 00:03:34.979 "enable_ktls": false 00:03:34.979 } 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "method": "sock_impl_set_options", 00:03:34.979 "params": { 00:03:34.979 "impl_name": "posix", 00:03:34.979 "recv_buf_size": 2097152, 00:03:34.979 "send_buf_size": 2097152, 00:03:34.979 "enable_recv_pipe": true, 00:03:34.979 "enable_quickack": false, 00:03:34.979 "enable_placement_id": 0, 00:03:34.979 "enable_zerocopy_send_server": true, 00:03:34.979 "enable_zerocopy_send_client": false, 00:03:34.979 "zerocopy_threshold": 0, 00:03:34.979 "tls_version": 0, 00:03:34.979 "enable_ktls": false 00:03:34.979 } 00:03:34.979 } 00:03:34.979 ] 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "subsystem": "vmd", 00:03:34.979 "config": [] 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "subsystem": "accel", 00:03:34.979 "config": [ 00:03:34.979 { 00:03:34.979 "method": "accel_set_options", 00:03:34.979 "params": { 00:03:34.979 "small_cache_size": 128, 00:03:34.979 "large_cache_size": 16, 00:03:34.979 "task_count": 2048, 00:03:34.979 "sequence_count": 2048, 00:03:34.979 "buf_count": 2048 00:03:34.979 } 00:03:34.979 } 00:03:34.979 ] 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "subsystem": "bdev", 00:03:34.979 "config": [ 00:03:34.979 { 00:03:34.979 "method": "bdev_set_options", 00:03:34.979 "params": { 00:03:34.979 "bdev_io_pool_size": 65535, 00:03:34.979 "bdev_io_cache_size": 256, 00:03:34.979 "bdev_auto_examine": true, 00:03:34.979 "iobuf_small_cache_size": 128, 00:03:34.979 "iobuf_large_cache_size": 16 00:03:34.979 } 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "method": "bdev_raid_set_options", 00:03:34.979 "params": { 00:03:34.979 "process_window_size_kb": 1024, 00:03:34.979 "process_max_bandwidth_mb_sec": 0 
00:03:34.979 } 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "method": "bdev_iscsi_set_options", 00:03:34.979 "params": { 00:03:34.979 "timeout_sec": 30 00:03:34.979 } 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "method": "bdev_nvme_set_options", 00:03:34.979 "params": { 00:03:34.979 "action_on_timeout": "none", 00:03:34.979 "timeout_us": 0, 00:03:34.979 "timeout_admin_us": 0, 00:03:34.979 "keep_alive_timeout_ms": 10000, 00:03:34.979 "arbitration_burst": 0, 00:03:34.979 "low_priority_weight": 0, 00:03:34.979 "medium_priority_weight": 0, 00:03:34.979 "high_priority_weight": 0, 00:03:34.979 "nvme_adminq_poll_period_us": 10000, 00:03:34.979 "nvme_ioq_poll_period_us": 0, 00:03:34.979 "io_queue_requests": 0, 00:03:34.979 "delay_cmd_submit": true, 00:03:34.979 "transport_retry_count": 4, 00:03:34.979 "bdev_retry_count": 3, 00:03:34.979 "transport_ack_timeout": 0, 00:03:34.979 "ctrlr_loss_timeout_sec": 0, 00:03:34.979 "reconnect_delay_sec": 0, 00:03:34.979 "fast_io_fail_timeout_sec": 0, 00:03:34.979 "disable_auto_failback": false, 00:03:34.979 "generate_uuids": false, 00:03:34.979 "transport_tos": 0, 00:03:34.979 "nvme_error_stat": false, 00:03:34.979 "rdma_srq_size": 0, 00:03:34.979 "io_path_stat": false, 00:03:34.979 "allow_accel_sequence": false, 00:03:34.979 "rdma_max_cq_size": 0, 00:03:34.979 "rdma_cm_event_timeout_ms": 0, 00:03:34.979 "dhchap_digests": [ 00:03:34.979 "sha256", 00:03:34.979 "sha384", 00:03:34.979 "sha512" 00:03:34.979 ], 00:03:34.979 "dhchap_dhgroups": [ 00:03:34.979 "null", 00:03:34.979 "ffdhe2048", 00:03:34.979 "ffdhe3072", 00:03:34.979 "ffdhe4096", 00:03:34.979 "ffdhe6144", 00:03:34.979 "ffdhe8192" 00:03:34.979 ] 00:03:34.979 } 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "method": "bdev_nvme_set_hotplug", 00:03:34.979 "params": { 00:03:34.979 "period_us": 100000, 00:03:34.979 "enable": false 00:03:34.979 } 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "method": "bdev_wait_for_examine" 00:03:34.979 } 00:03:34.979 ] 00:03:34.979 }, 00:03:34.979 { 
00:03:34.979 "subsystem": "scsi", 00:03:34.979 "config": null 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "subsystem": "scheduler", 00:03:34.979 "config": [ 00:03:34.979 { 00:03:34.979 "method": "framework_set_scheduler", 00:03:34.979 "params": { 00:03:34.979 "name": "static" 00:03:34.979 } 00:03:34.979 } 00:03:34.979 ] 00:03:34.979 }, 00:03:34.979 { 00:03:34.979 "subsystem": "vhost_scsi", 00:03:34.980 "config": [] 00:03:34.980 }, 00:03:34.980 { 00:03:34.980 "subsystem": "vhost_blk", 00:03:34.980 "config": [] 00:03:34.980 }, 00:03:34.980 { 00:03:34.980 "subsystem": "ublk", 00:03:34.980 "config": [] 00:03:34.980 }, 00:03:34.980 { 00:03:34.980 "subsystem": "nbd", 00:03:34.980 "config": [] 00:03:34.980 }, 00:03:34.980 { 00:03:34.980 "subsystem": "nvmf", 00:03:34.980 "config": [ 00:03:34.980 { 00:03:34.980 "method": "nvmf_set_config", 00:03:34.980 "params": { 00:03:34.980 "discovery_filter": "match_any", 00:03:34.980 "admin_cmd_passthru": { 00:03:34.980 "identify_ctrlr": false 00:03:34.980 }, 00:03:34.980 "dhchap_digests": [ 00:03:34.980 "sha256", 00:03:34.980 "sha384", 00:03:34.980 "sha512" 00:03:34.980 ], 00:03:34.980 "dhchap_dhgroups": [ 00:03:34.980 "null", 00:03:34.980 "ffdhe2048", 00:03:34.980 "ffdhe3072", 00:03:34.980 "ffdhe4096", 00:03:34.980 "ffdhe6144", 00:03:34.980 "ffdhe8192" 00:03:34.980 ] 00:03:34.980 } 00:03:34.980 }, 00:03:34.980 { 00:03:34.980 "method": "nvmf_set_max_subsystems", 00:03:34.980 "params": { 00:03:34.980 "max_subsystems": 1024 00:03:34.980 } 00:03:34.980 }, 00:03:34.980 { 00:03:34.980 "method": "nvmf_set_crdt", 00:03:34.980 "params": { 00:03:34.980 "crdt1": 0, 00:03:34.980 "crdt2": 0, 00:03:34.980 "crdt3": 0 00:03:34.980 } 00:03:34.980 }, 00:03:34.980 { 00:03:34.980 "method": "nvmf_create_transport", 00:03:34.980 "params": { 00:03:34.980 "trtype": "TCP", 00:03:34.980 "max_queue_depth": 128, 00:03:34.980 "max_io_qpairs_per_ctrlr": 127, 00:03:34.980 "in_capsule_data_size": 4096, 00:03:34.980 "max_io_size": 131072, 00:03:34.980 
"io_unit_size": 131072, 00:03:34.980 "max_aq_depth": 128, 00:03:34.980 "num_shared_buffers": 511, 00:03:34.980 "buf_cache_size": 4294967295, 00:03:34.980 "dif_insert_or_strip": false, 00:03:34.980 "zcopy": false, 00:03:34.980 "c2h_success": true, 00:03:34.980 "sock_priority": 0, 00:03:34.980 "abort_timeout_sec": 1, 00:03:34.980 "ack_timeout": 0, 00:03:34.980 "data_wr_pool_size": 0 00:03:34.980 } 00:03:34.980 } 00:03:34.980 ] 00:03:34.980 }, 00:03:34.980 { 00:03:34.980 "subsystem": "iscsi", 00:03:34.980 "config": [ 00:03:34.980 { 00:03:34.980 "method": "iscsi_set_options", 00:03:34.980 "params": { 00:03:34.980 "node_base": "iqn.2016-06.io.spdk", 00:03:34.980 "max_sessions": 128, 00:03:34.980 "max_connections_per_session": 2, 00:03:34.980 "max_queue_depth": 64, 00:03:34.980 "default_time2wait": 2, 00:03:34.980 "default_time2retain": 20, 00:03:34.980 "first_burst_length": 8192, 00:03:34.980 "immediate_data": true, 00:03:34.980 "allow_duplicated_isid": false, 00:03:34.980 "error_recovery_level": 0, 00:03:34.980 "nop_timeout": 60, 00:03:34.980 "nop_in_interval": 30, 00:03:34.980 "disable_chap": false, 00:03:34.980 "require_chap": false, 00:03:34.980 "mutual_chap": false, 00:03:34.980 "chap_group": 0, 00:03:34.980 "max_large_datain_per_connection": 64, 00:03:34.980 "max_r2t_per_connection": 4, 00:03:34.980 "pdu_pool_size": 36864, 00:03:34.980 "immediate_data_pool_size": 16384, 00:03:34.980 "data_out_pool_size": 2048 00:03:34.980 } 00:03:34.980 } 00:03:34.980 ] 00:03:34.980 } 00:03:34.980 ] 00:03:34.980 } 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2716958 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2716958 ']' 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2716958 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716958 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716958' 00:03:34.980 killing process with pid 2716958 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2716958 00:03:34.980 09:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2716958 00:03:35.239 09:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2716978 00:03:35.239 09:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.239 09:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2716978 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2716978 ']' 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2716978 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2716978 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2716978' 00:03:40.509 killing process with pid 2716978 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2716978 00:03:40.509 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2716978 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:40.769 00:03:40.769 real 0m6.291s 00:03:40.769 user 0m5.973s 00:03:40.769 sys 0m0.607s 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:40.769 ************************************ 00:03:40.769 END TEST skip_rpc_with_json 00:03:40.769 ************************************ 00:03:40.769 09:36:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:40.769 09:36:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.769 09:36:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.769 09:36:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.769 ************************************ 00:03:40.769 START TEST skip_rpc_with_delay 00:03:40.769 ************************************ 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:40.769 09:36:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:40.769 [2024-11-20 09:36:04.005777] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:40.769 09:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:40.769 09:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:40.769 09:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:40.769 09:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:40.769 00:03:40.769 real 0m0.068s 00:03:40.769 user 0m0.041s 00:03:40.769 sys 0m0.027s 00:03:40.769 09:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.769 09:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:40.769 ************************************ 00:03:40.769 END TEST skip_rpc_with_delay 00:03:40.769 ************************************ 00:03:40.769 09:36:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:40.769 09:36:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:40.769 09:36:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:40.769 09:36:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.769 09:36:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.769 09:36:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.769 ************************************ 00:03:40.769 START TEST exit_on_failed_rpc_init 00:03:40.769 ************************************ 00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2718074 00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2718074 00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2718074 ']' 00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:40.769 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:41.029 [2024-11-20 09:36:04.146328] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:03:41.029 [2024-11-20 09:36:04.146375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718074 ] 00:03:41.029 [2024-11-20 09:36:04.222417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.029 [2024-11-20 09:36:04.265401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:41.288 
09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:41.288 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:41.288 [2024-11-20 09:36:04.542762] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:03:41.289 [2024-11-20 09:36:04.542809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718302 ] 00:03:41.289 [2024-11-20 09:36:04.616337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.548 [2024-11-20 09:36:04.657521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:41.548 [2024-11-20 09:36:04.657576] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:41.548 [2024-11-20 09:36:04.657586] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:41.548 [2024-11-20 09:36:04.657591] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2718074 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2718074 ']' 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2718074 00:03:41.548 09:36:04 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718074 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718074' 00:03:41.548 killing process with pid 2718074 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2718074 00:03:41.548 09:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2718074 00:03:41.807 00:03:41.808 real 0m0.962s 00:03:41.808 user 0m1.012s 00:03:41.808 sys 0m0.403s 00:03:41.808 09:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.808 09:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:41.808 ************************************ 00:03:41.808 END TEST exit_on_failed_rpc_init 00:03:41.808 ************************************ 00:03:41.808 09:36:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:41.808 00:03:41.808 real 0m13.143s 00:03:41.808 user 0m12.371s 00:03:41.808 sys 0m1.585s 00:03:41.808 09:36:05 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.808 09:36:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:41.808 ************************************ 00:03:41.808 END TEST skip_rpc 00:03:41.808 ************************************ 00:03:41.808 09:36:05 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:41.808 09:36:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.808 09:36:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.808 09:36:05 -- common/autotest_common.sh@10 -- # set +x 00:03:42.067 ************************************ 00:03:42.067 START TEST rpc_client 00:03:42.067 ************************************ 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:42.067 * Looking for test storage... 00:03:42.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1703 -- # lcov --version 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.067 09:36:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:03:42.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.067 --rc genhtml_branch_coverage=1 00:03:42.067 --rc genhtml_function_coverage=1 00:03:42.067 --rc genhtml_legend=1 00:03:42.067 --rc geninfo_all_blocks=1 00:03:42.067 --rc geninfo_unexecuted_blocks=1 00:03:42.067 00:03:42.067 ' 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:03:42.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.067 --rc genhtml_branch_coverage=1 
00:03:42.067 --rc genhtml_function_coverage=1 00:03:42.067 --rc genhtml_legend=1 00:03:42.067 --rc geninfo_all_blocks=1 00:03:42.067 --rc geninfo_unexecuted_blocks=1 00:03:42.067 00:03:42.067 ' 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:03:42.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.067 --rc genhtml_branch_coverage=1 00:03:42.067 --rc genhtml_function_coverage=1 00:03:42.067 --rc genhtml_legend=1 00:03:42.067 --rc geninfo_all_blocks=1 00:03:42.067 --rc geninfo_unexecuted_blocks=1 00:03:42.067 00:03:42.067 ' 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:03:42.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.067 --rc genhtml_branch_coverage=1 00:03:42.067 --rc genhtml_function_coverage=1 00:03:42.067 --rc genhtml_legend=1 00:03:42.067 --rc geninfo_all_blocks=1 00:03:42.067 --rc geninfo_unexecuted_blocks=1 00:03:42.067 00:03:42.067 ' 00:03:42.067 09:36:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:42.067 OK 00:03:42.067 09:36:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:42.067 00:03:42.067 real 0m0.201s 00:03:42.067 user 0m0.114s 00:03:42.067 sys 0m0.100s 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.067 09:36:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:42.067 ************************************ 00:03:42.067 END TEST rpc_client 00:03:42.067 ************************************ 00:03:42.068 09:36:05 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:42.068 09:36:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.068 09:36:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.068 09:36:05 -- common/autotest_common.sh@10 
-- # set +x 00:03:42.328 ************************************ 00:03:42.328 START TEST json_config 00:03:42.328 ************************************ 00:03:42.328 09:36:05 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:42.328 09:36:05 json_config -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:03:42.328 09:36:05 json_config -- common/autotest_common.sh@1703 -- # lcov --version 00:03:42.328 09:36:05 json_config -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:03:42.328 09:36:05 json_config -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:03:42.328 09:36:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.328 09:36:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.328 09:36:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.328 09:36:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.328 09:36:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.328 09:36:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.328 09:36:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.328 09:36:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.328 09:36:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.328 09:36:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.328 09:36:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.328 09:36:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:42.328 09:36:05 json_config -- scripts/common.sh@345 -- # : 1 00:03:42.328 09:36:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.328 09:36:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:42.328 09:36:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:42.328 09:36:05 json_config -- scripts/common.sh@353 -- # local d=1 00:03:42.328 09:36:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.328 09:36:05 json_config -- scripts/common.sh@355 -- # echo 1 00:03:42.328 09:36:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.328 09:36:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:42.328 09:36:05 json_config -- scripts/common.sh@353 -- # local d=2 00:03:42.328 09:36:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.328 09:36:05 json_config -- scripts/common.sh@355 -- # echo 2 00:03:42.328 09:36:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.328 09:36:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.328 09:36:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.328 09:36:05 json_config -- scripts/common.sh@368 -- # return 0 00:03:42.328 09:36:05 json_config -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.328 09:36:05 json_config -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:03:42.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.328 --rc genhtml_branch_coverage=1 00:03:42.328 --rc genhtml_function_coverage=1 00:03:42.328 --rc genhtml_legend=1 00:03:42.328 --rc geninfo_all_blocks=1 00:03:42.328 --rc geninfo_unexecuted_blocks=1 00:03:42.328 00:03:42.328 ' 00:03:42.328 09:36:05 json_config -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:03:42.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.328 --rc genhtml_branch_coverage=1 00:03:42.328 --rc genhtml_function_coverage=1 00:03:42.328 --rc genhtml_legend=1 00:03:42.328 --rc geninfo_all_blocks=1 00:03:42.328 --rc geninfo_unexecuted_blocks=1 00:03:42.328 00:03:42.328 ' 00:03:42.328 09:36:05 json_config -- 
common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:03:42.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.328 --rc genhtml_branch_coverage=1 00:03:42.328 --rc genhtml_function_coverage=1 00:03:42.328 --rc genhtml_legend=1 00:03:42.328 --rc geninfo_all_blocks=1 00:03:42.328 --rc geninfo_unexecuted_blocks=1 00:03:42.328 00:03:42.328 ' 00:03:42.328 09:36:05 json_config -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:03:42.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.328 --rc genhtml_branch_coverage=1 00:03:42.328 --rc genhtml_function_coverage=1 00:03:42.328 --rc genhtml_legend=1 00:03:42.328 --rc geninfo_all_blocks=1 00:03:42.328 --rc geninfo_unexecuted_blocks=1 00:03:42.328 00:03:42.328 ' 00:03:42.328 09:36:05 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:42.328 09:36:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:42.328 09:36:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.328 09:36:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:42.328 09:36:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.328 09:36:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:42.329 09:36:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:42.329 09:36:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.329 09:36:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.329 09:36:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.329 09:36:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.329 09:36:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.329 09:36:05 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.329 09:36:05 json_config -- paths/export.sh@5 -- # export PATH 00:03:42.329 09:36:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@51 -- # : 0 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:42.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:42.329 09:36:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:42.329 INFO: JSON configuration test init 00:03:42.329 09:36:05 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.329 09:36:05 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:42.329 09:36:05 json_config -- json_config/common.sh@9 -- # local app=target 00:03:42.329 09:36:05 json_config -- json_config/common.sh@10 -- # shift 00:03:42.329 09:36:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:42.329 09:36:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:42.329 09:36:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:42.329 09:36:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.329 09:36:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.329 09:36:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2718508 00:03:42.329 09:36:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:42.329 Waiting for target to run... 
00:03:42.329 09:36:05 json_config -- json_config/common.sh@25 -- # waitforlisten 2718508 /var/tmp/spdk_tgt.sock 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@835 -- # '[' -z 2718508 ']' 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:42.329 09:36:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:42.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:42.329 09:36:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.588 [2024-11-20 09:36:05.669598] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:03:42.588 [2024-11-20 09:36:05.669651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2718508 ] 00:03:42.847 [2024-11-20 09:36:06.113178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.847 [2024-11-20 09:36:06.167889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.415 09:36:06 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:43.415 09:36:06 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:43.415 09:36:06 json_config -- json_config/common.sh@26 -- # echo '' 00:03:43.415 00:03:43.415 09:36:06 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:43.416 09:36:06 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:43.416 09:36:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.416 09:36:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.416 09:36:06 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:43.416 09:36:06 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:43.416 09:36:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:43.416 09:36:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.416 09:36:06 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:43.416 09:36:06 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:43.416 09:36:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:46.706 09:36:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.706 09:36:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:46.706 09:36:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@54 -- # sort 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:46.706 09:36:09 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:46.706 09:36:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:46.706 09:36:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:46.706 09:36:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.706 09:36:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:46.706 09:36:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:46.706 09:36:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:46.966 MallocForNvmf0 00:03:46.966 09:36:10 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:46.966 09:36:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:46.966 MallocForNvmf1 00:03:46.966 09:36:10 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:46.966 09:36:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:47.224 [2024-11-20 09:36:10.460972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:47.224 09:36:10 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:47.224 09:36:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:47.484 09:36:10 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:47.484 09:36:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:47.743 09:36:10 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:47.743 09:36:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:47.743 09:36:11 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:47.743 09:36:11 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:48.002 [2024-11-20 09:36:11.207318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:48.002 09:36:11 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:48.002 09:36:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:48.002 09:36:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.002 09:36:11 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:48.002 09:36:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:48.002 09:36:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.002 09:36:11 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:48.002 09:36:11 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:48.002 09:36:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:48.261 MallocBdevForConfigChangeCheck 00:03:48.261 09:36:11 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:48.261 09:36:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:48.261 09:36:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.261 09:36:11 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:48.261 09:36:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:48.829 09:36:11 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:48.829 INFO: shutting down applications... 00:03:48.829 09:36:11 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:48.829 09:36:11 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:48.829 09:36:11 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:48.829 09:36:11 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:50.284 Calling clear_iscsi_subsystem 00:03:50.284 Calling clear_nvmf_subsystem 00:03:50.284 Calling clear_nbd_subsystem 00:03:50.284 Calling clear_ublk_subsystem 00:03:50.284 Calling clear_vhost_blk_subsystem 00:03:50.284 Calling clear_vhost_scsi_subsystem 00:03:50.284 Calling clear_bdev_subsystem 00:03:50.284 09:36:13 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:50.284 09:36:13 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:50.284 09:36:13 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:50.284 09:36:13 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:50.284 09:36:13 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:50.284 09:36:13 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:50.543 09:36:13 json_config -- json_config/json_config.sh@352 -- # break 00:03:50.543 09:36:13 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:50.543 09:36:13 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:50.543 09:36:13 json_config -- json_config/common.sh@31 -- # local app=target 00:03:50.543 09:36:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:50.543 09:36:13 json_config -- json_config/common.sh@35 -- # [[ -n 2718508 ]] 00:03:50.543 09:36:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2718508 00:03:50.543 09:36:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:50.543 09:36:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:50.543 09:36:13 json_config -- json_config/common.sh@41 -- # kill -0 2718508 00:03:50.543 09:36:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:51.112 09:36:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:51.112 09:36:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:51.112 09:36:14 json_config -- json_config/common.sh@41 -- # kill -0 2718508 00:03:51.112 09:36:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:51.112 09:36:14 json_config -- json_config/common.sh@43 -- # break 00:03:51.112 09:36:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:51.112 09:36:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:51.112 SPDK target shutdown done 00:03:51.112 09:36:14 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:51.112 INFO: relaunching applications... 
00:03:51.112 09:36:14 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:51.112 09:36:14 json_config -- json_config/common.sh@9 -- # local app=target 00:03:51.112 09:36:14 json_config -- json_config/common.sh@10 -- # shift 00:03:51.112 09:36:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:51.112 09:36:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:51.112 09:36:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:51.112 09:36:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:51.112 09:36:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:51.112 09:36:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2720560 00:03:51.112 09:36:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:51.112 Waiting for target to run... 00:03:51.112 09:36:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:51.112 09:36:14 json_config -- json_config/common.sh@25 -- # waitforlisten 2720560 /var/tmp/spdk_tgt.sock 00:03:51.113 09:36:14 json_config -- common/autotest_common.sh@835 -- # '[' -z 2720560 ']' 00:03:51.113 09:36:14 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:51.113 09:36:14 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:51.113 09:36:14 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:51.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:51.113 09:36:14 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:51.113 09:36:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.113 [2024-11-20 09:36:14.347193] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:03:51.113 [2024-11-20 09:36:14.347266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2720560 ] 00:03:51.681 [2024-11-20 09:36:14.803625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.681 [2024-11-20 09:36:14.858393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.972 [2024-11-20 09:36:17.892839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.972 [2024-11-20 09:36:17.925203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.540 09:36:18 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.540 09:36:18 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:55.540 09:36:18 json_config -- json_config/common.sh@26 -- # echo '' 00:03:55.540 00:03:55.540 09:36:18 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:55.540 09:36:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:55.540 INFO: Checking if target configuration is the same... 
00:03:55.540 09:36:18 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.540 09:36:18 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:55.540 09:36:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.540 + '[' 2 -ne 2 ']' 00:03:55.540 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:55.540 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:55.540 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.540 +++ basename /dev/fd/62 00:03:55.540 ++ mktemp /tmp/62.XXX 00:03:55.540 + tmp_file_1=/tmp/62.3nP 00:03:55.540 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.540 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.540 + tmp_file_2=/tmp/spdk_tgt_config.json.XNd 00:03:55.540 + ret=0 00:03:55.540 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:55.799 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:55.799 + diff -u /tmp/62.3nP /tmp/spdk_tgt_config.json.XNd 00:03:55.799 + echo 'INFO: JSON config files are the same' 00:03:55.799 INFO: JSON config files are the same 00:03:55.799 + rm /tmp/62.3nP /tmp/spdk_tgt_config.json.XNd 00:03:55.799 + exit 0 00:03:55.799 09:36:18 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:55.799 09:36:18 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:55.799 INFO: changing configuration and checking if this can be detected... 
00:03:55.799 09:36:18 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:55.799 09:36:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.060 09:36:19 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:56.060 09:36:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.060 09:36:19 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.060 + '[' 2 -ne 2 ']' 00:03:56.060 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:56.060 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:56.060 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.060 +++ basename /dev/fd/62 00:03:56.060 ++ mktemp /tmp/62.XXX 00:03:56.060 + tmp_file_1=/tmp/62.ags 00:03:56.060 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.060 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.060 + tmp_file_2=/tmp/spdk_tgt_config.json.8WQ 00:03:56.060 + ret=0 00:03:56.060 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.320 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.320 + diff -u /tmp/62.ags /tmp/spdk_tgt_config.json.8WQ 00:03:56.320 + ret=1 00:03:56.320 + echo '=== Start of file: /tmp/62.ags ===' 00:03:56.320 + cat /tmp/62.ags 00:03:56.320 + echo '=== End of file: /tmp/62.ags ===' 00:03:56.320 + echo '' 00:03:56.320 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8WQ ===' 00:03:56.320 + cat /tmp/spdk_tgt_config.json.8WQ 00:03:56.320 + echo '=== End of file: /tmp/spdk_tgt_config.json.8WQ ===' 00:03:56.320 + echo '' 00:03:56.320 + rm /tmp/62.ags /tmp/spdk_tgt_config.json.8WQ 00:03:56.320 + exit 1 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:56.320 INFO: configuration change detected. 
00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:56.320 09:36:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.320 09:36:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@324 -- # [[ -n 2720560 ]] 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:56.320 09:36:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.320 09:36:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:56.320 09:36:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.320 09:36:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.320 09:36:19 json_config -- json_config/json_config.sh@330 -- # killprocess 2720560 00:03:56.320 09:36:19 json_config -- common/autotest_common.sh@954 -- # '[' -z 2720560 ']' 00:03:56.579 09:36:19 json_config -- common/autotest_common.sh@958 -- # kill -0 
2720560 00:03:56.579 09:36:19 json_config -- common/autotest_common.sh@959 -- # uname 00:03:56.579 09:36:19 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:56.579 09:36:19 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2720560 00:03:56.579 09:36:19 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:56.579 09:36:19 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:56.579 09:36:19 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2720560' 00:03:56.579 killing process with pid 2720560 00:03:56.579 09:36:19 json_config -- common/autotest_common.sh@973 -- # kill 2720560 00:03:56.579 09:36:19 json_config -- common/autotest_common.sh@978 -- # wait 2720560 00:03:57.958 09:36:21 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.958 09:36:21 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:57.958 09:36:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.958 09:36:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.958 09:36:21 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:57.958 09:36:21 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:57.958 INFO: Success 00:03:57.958 00:03:57.958 real 0m15.775s 00:03:57.958 user 0m16.219s 00:03:57.958 sys 0m2.749s 00:03:57.958 09:36:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.958 09:36:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.958 ************************************ 00:03:57.958 END TEST json_config 00:03:57.958 ************************************ 00:03:57.958 09:36:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:57.958 09:36:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.958 09:36:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.958 09:36:21 -- common/autotest_common.sh@10 -- # set +x 00:03:57.958 ************************************ 00:03:57.958 START TEST json_config_extra_key 00:03:57.958 ************************************ 00:03:57.958 09:36:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:58.218 09:36:21 json_config_extra_key -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:03:58.218 09:36:21 json_config_extra_key -- common/autotest_common.sh@1703 -- # lcov --version 00:03:58.218 09:36:21 json_config_extra_key -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:03:58.218 09:36:21 json_config_extra_key -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.218 09:36:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:58.218 09:36:21 json_config_extra_key -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.218 09:36:21 json_config_extra_key -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:03:58.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.218 --rc genhtml_branch_coverage=1 00:03:58.218 --rc genhtml_function_coverage=1 00:03:58.218 --rc genhtml_legend=1 00:03:58.218 --rc geninfo_all_blocks=1 
00:03:58.218 --rc geninfo_unexecuted_blocks=1 00:03:58.218 00:03:58.218 ' 00:03:58.218 09:36:21 json_config_extra_key -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:03:58.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.218 --rc genhtml_branch_coverage=1 00:03:58.218 --rc genhtml_function_coverage=1 00:03:58.218 --rc genhtml_legend=1 00:03:58.218 --rc geninfo_all_blocks=1 00:03:58.218 --rc geninfo_unexecuted_blocks=1 00:03:58.218 00:03:58.218 ' 00:03:58.218 09:36:21 json_config_extra_key -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:03:58.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.218 --rc genhtml_branch_coverage=1 00:03:58.218 --rc genhtml_function_coverage=1 00:03:58.218 --rc genhtml_legend=1 00:03:58.218 --rc geninfo_all_blocks=1 00:03:58.218 --rc geninfo_unexecuted_blocks=1 00:03:58.218 00:03:58.218 ' 00:03:58.218 09:36:21 json_config_extra_key -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:03:58.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.218 --rc genhtml_branch_coverage=1 00:03:58.218 --rc genhtml_function_coverage=1 00:03:58.218 --rc genhtml_legend=1 00:03:58.218 --rc geninfo_all_blocks=1 00:03:58.218 --rc geninfo_unexecuted_blocks=1 00:03:58.218 00:03:58.218 ' 00:03:58.218 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:58.218 09:36:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:58.219 09:36:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:58.219 09:36:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:58.219 09:36:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:58.219 09:36:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:58.219 09:36:21 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.219 09:36:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.219 09:36:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.219 09:36:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:58.219 09:36:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:58.219 09:36:21 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:58.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:58.219 09:36:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:58.219 INFO: launching applications... 00:03:58.219 09:36:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2721844 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.219 Waiting for target to run... 
00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2721844 /var/tmp/spdk_tgt.sock 00:03:58.219 09:36:21 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2721844 ']' 00:03:58.219 09:36:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:58.219 09:36:21 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.219 09:36:21 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:58.219 09:36:21 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.219 09:36:21 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:58.219 09:36:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:58.219 [2024-11-20 09:36:21.509469] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:03:58.219 [2024-11-20 09:36:21.509517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2721844 ] 00:03:58.478 [2024-11-20 09:36:21.794030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.737 [2024-11-20 09:36:21.829111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.305 09:36:22 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:59.305 09:36:22 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:59.305 09:36:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:59.305 00:03:59.305 09:36:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:59.305 INFO: shutting down applications... 00:03:59.305 09:36:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:59.305 09:36:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:59.305 09:36:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:59.305 09:36:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2721844 ]] 00:03:59.305 09:36:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2721844 00:03:59.305 09:36:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:59.305 09:36:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.305 09:36:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2721844 00:03:59.305 09:36:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:59.564 09:36:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:59.564 09:36:22 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.564 09:36:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2721844 00:03:59.564 09:36:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:59.564 09:36:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:59.564 09:36:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:59.564 09:36:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:59.564 SPDK target shutdown done 00:03:59.564 09:36:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:59.564 Success 00:03:59.564 00:03:59.564 real 0m1.589s 00:03:59.564 user 0m1.383s 00:03:59.564 sys 0m0.407s 00:03:59.564 09:36:22 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.564 09:36:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:59.564 ************************************ 00:03:59.564 END TEST json_config_extra_key 00:03:59.564 ************************************ 00:03:59.823 09:36:22 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:59.823 09:36:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.823 09:36:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.823 09:36:22 -- common/autotest_common.sh@10 -- # set +x 00:03:59.823 ************************************ 00:03:59.823 START TEST alias_rpc 00:03:59.823 ************************************ 00:03:59.823 09:36:22 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:59.823 * Looking for test storage... 
00:03:59.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:59.823 09:36:23 alias_rpc -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:03:59.823 09:36:23 alias_rpc -- common/autotest_common.sh@1703 -- # lcov --version 00:03:59.823 09:36:23 alias_rpc -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:03:59.823 09:36:23 alias_rpc -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.823 09:36:23 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:59.823 09:36:23 alias_rpc -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.823 09:36:23 alias_rpc -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:03:59.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.823 --rc genhtml_branch_coverage=1 00:03:59.823 --rc genhtml_function_coverage=1 00:03:59.823 --rc genhtml_legend=1 00:03:59.823 --rc geninfo_all_blocks=1 00:03:59.823 --rc geninfo_unexecuted_blocks=1 00:03:59.823 00:03:59.823 ' 00:03:59.823 09:36:23 alias_rpc -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:03:59.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.823 --rc genhtml_branch_coverage=1 00:03:59.823 --rc genhtml_function_coverage=1 00:03:59.823 --rc genhtml_legend=1 00:03:59.823 --rc geninfo_all_blocks=1 00:03:59.823 --rc geninfo_unexecuted_blocks=1 00:03:59.823 00:03:59.823 ' 00:03:59.823 09:36:23 alias_rpc -- common/autotest_common.sh@1717 -- 
# export 'LCOV=lcov 00:03:59.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.823 --rc genhtml_branch_coverage=1 00:03:59.823 --rc genhtml_function_coverage=1 00:03:59.823 --rc genhtml_legend=1 00:03:59.823 --rc geninfo_all_blocks=1 00:03:59.823 --rc geninfo_unexecuted_blocks=1 00:03:59.823 00:03:59.823 ' 00:03:59.823 09:36:23 alias_rpc -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:03:59.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.823 --rc genhtml_branch_coverage=1 00:03:59.823 --rc genhtml_function_coverage=1 00:03:59.823 --rc genhtml_legend=1 00:03:59.823 --rc geninfo_all_blocks=1 00:03:59.823 --rc geninfo_unexecuted_blocks=1 00:03:59.823 00:03:59.823 ' 00:03:59.823 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:59.824 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2722135 00:03:59.824 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2722135 00:03:59.824 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.824 09:36:23 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2722135 ']' 00:03:59.824 09:36:23 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.824 09:36:23 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.824 09:36:23 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.824 09:36:23 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.824 09:36:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.083 [2024-11-20 09:36:23.159907] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
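The `lt 1.15 2` trace above (`scripts/common.sh` `cmp_versions`) splits each version string on `.`, `-`, and `:` into an array and compares the components numerically, padding missing fields with 0. A minimal sketch of that comparison under the same splitting rules; `version_lt` is an illustrative name:

```shell
#!/usr/bin/env bash
# Component-wise "less than" for dotted version strings, mirroring the
# cmp_versions trace above: split on .-: and compare fields as integers.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        # a missing component compares as 0, so "2" == "2.0"
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal is not "less than"
}

version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'
```

Numeric field comparison is what makes `1.9 < 1.15` come out true, which a plain string compare would get wrong.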
00:04:00.083 [2024-11-20 09:36:23.159959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722135 ] 00:04:00.083 [2024-11-20 09:36:23.235805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.083 [2024-11-20 09:36:23.278993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.343 09:36:23 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.343 09:36:23 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:00.343 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:00.602 09:36:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2722135 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2722135 ']' 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2722135 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2722135 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2722135' 00:04:00.602 killing process with pid 2722135 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@973 -- # kill 2722135 00:04:00.602 09:36:23 alias_rpc -- common/autotest_common.sh@978 -- # wait 2722135 00:04:00.862 00:04:00.862 real 0m1.126s 00:04:00.862 user 0m1.120s 00:04:00.862 sys 0m0.444s 00:04:00.862 09:36:24 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.862 09:36:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.862 ************************************ 00:04:00.862 END TEST alias_rpc 00:04:00.862 ************************************ 00:04:00.862 09:36:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:00.862 09:36:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:00.862 09:36:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.862 09:36:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.862 09:36:24 -- common/autotest_common.sh@10 -- # set +x 00:04:00.862 ************************************ 00:04:00.862 START TEST spdkcli_tcp 00:04:00.862 ************************************ 00:04:00.862 09:36:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.122 * Looking for test storage... 
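Each test above launches `spdk_tgt` and then blocks in `waitforlisten`, printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." until the RPC socket is ready. A minimal sketch of that polling pattern; a plain file stands in for the UNIX socket so the sketch runs anywhere, and `waitforcondition` is an illustrative name (the real helper additionally checks the PID and connects to the socket):

```shell
#!/usr/bin/env bash
# Poll for a path to appear, up to max_retries, mirroring the
# waitforlisten loop traced above.
waitforcondition() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0   # real harness tests -S and connects
        sleep 0.1
    done
    return 1
}

tmp=$(mktemp -u)
( sleep 0.3; : > "$tmp" ) &    # stand-in for spdk_tgt creating its socket
echo "Waiting for process to start up and listen on UNIX domain socket $tmp..."
waitforcondition "$tmp" && echo ready
rm -f "$tmp"
```

The spdkcli_tcp run that follows adds one twist on top of this: it bridges the UNIX socket to TCP (`socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`) so `rpc.py -s 127.0.0.1 -p 9998` can exercise the TCP transport.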
00:04:01.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@1703 -- # lcov --version 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.122 09:36:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:01.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.122 --rc genhtml_branch_coverage=1 00:04:01.122 --rc genhtml_function_coverage=1 00:04:01.122 --rc genhtml_legend=1 00:04:01.122 --rc geninfo_all_blocks=1 00:04:01.122 --rc geninfo_unexecuted_blocks=1 00:04:01.122 00:04:01.122 ' 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:01.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.122 --rc genhtml_branch_coverage=1 00:04:01.122 --rc genhtml_function_coverage=1 00:04:01.122 --rc genhtml_legend=1 00:04:01.122 --rc geninfo_all_blocks=1 00:04:01.122 --rc geninfo_unexecuted_blocks=1 00:04:01.122 00:04:01.122 ' 00:04:01.122 09:36:24 spdkcli_tcp -- 
common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:01.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.122 --rc genhtml_branch_coverage=1 00:04:01.122 --rc genhtml_function_coverage=1 00:04:01.122 --rc genhtml_legend=1 00:04:01.122 --rc geninfo_all_blocks=1 00:04:01.122 --rc geninfo_unexecuted_blocks=1 00:04:01.122 00:04:01.122 ' 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:01.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.122 --rc genhtml_branch_coverage=1 00:04:01.122 --rc genhtml_function_coverage=1 00:04:01.122 --rc genhtml_legend=1 00:04:01.122 --rc geninfo_all_blocks=1 00:04:01.122 --rc geninfo_unexecuted_blocks=1 00:04:01.122 00:04:01.122 ' 00:04:01.122 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:01.122 09:36:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:01.122 09:36:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:01.122 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:01.122 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:01.122 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:01.122 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.122 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2722422 00:04:01.122 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:01.122 09:36:24 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2722422 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2722422 ']' 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.122 09:36:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.123 09:36:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.123 09:36:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.123 [2024-11-20 09:36:24.360898] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:01.123 [2024-11-20 09:36:24.360959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722422 ] 00:04:01.123 [2024-11-20 09:36:24.436490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:01.382 [2024-11-20 09:36:24.480395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.382 [2024-11-20 09:36:24.480396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.382 09:36:24 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.382 09:36:24 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:01.382 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2722439 00:04:01.382 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:01.382 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:04:01.642 [ 00:04:01.642 "bdev_malloc_delete", 00:04:01.642 "bdev_malloc_create", 00:04:01.642 "bdev_null_resize", 00:04:01.642 "bdev_null_delete", 00:04:01.642 "bdev_null_create", 00:04:01.642 "bdev_nvme_cuse_unregister", 00:04:01.642 "bdev_nvme_cuse_register", 00:04:01.642 "bdev_opal_new_user", 00:04:01.642 "bdev_opal_set_lock_state", 00:04:01.642 "bdev_opal_delete", 00:04:01.642 "bdev_opal_get_info", 00:04:01.642 "bdev_opal_create", 00:04:01.642 "bdev_nvme_opal_revert", 00:04:01.642 "bdev_nvme_opal_init", 00:04:01.642 "bdev_nvme_send_cmd", 00:04:01.642 "bdev_nvme_set_keys", 00:04:01.642 "bdev_nvme_get_path_iostat", 00:04:01.642 "bdev_nvme_get_mdns_discovery_info", 00:04:01.642 "bdev_nvme_stop_mdns_discovery", 00:04:01.642 "bdev_nvme_start_mdns_discovery", 00:04:01.642 "bdev_nvme_set_multipath_policy", 00:04:01.642 "bdev_nvme_set_preferred_path", 00:04:01.642 "bdev_nvme_get_io_paths", 00:04:01.642 "bdev_nvme_remove_error_injection", 00:04:01.642 "bdev_nvme_add_error_injection", 00:04:01.642 "bdev_nvme_get_discovery_info", 00:04:01.642 "bdev_nvme_stop_discovery", 00:04:01.642 "bdev_nvme_start_discovery", 00:04:01.642 "bdev_nvme_get_controller_health_info", 00:04:01.642 "bdev_nvme_disable_controller", 00:04:01.642 "bdev_nvme_enable_controller", 00:04:01.642 "bdev_nvme_reset_controller", 00:04:01.642 "bdev_nvme_get_transport_statistics", 00:04:01.642 "bdev_nvme_apply_firmware", 00:04:01.642 "bdev_nvme_detach_controller", 00:04:01.642 "bdev_nvme_get_controllers", 00:04:01.642 "bdev_nvme_attach_controller", 00:04:01.642 "bdev_nvme_set_hotplug", 00:04:01.642 "bdev_nvme_set_options", 00:04:01.642 "bdev_passthru_delete", 00:04:01.642 "bdev_passthru_create", 00:04:01.642 "bdev_lvol_set_parent_bdev", 00:04:01.642 "bdev_lvol_set_parent", 00:04:01.642 "bdev_lvol_check_shallow_copy", 00:04:01.642 "bdev_lvol_start_shallow_copy", 00:04:01.642 "bdev_lvol_grow_lvstore", 00:04:01.642 "bdev_lvol_get_lvols", 00:04:01.642 "bdev_lvol_get_lvstores", 
00:04:01.642 "bdev_lvol_delete", 00:04:01.642 "bdev_lvol_set_read_only", 00:04:01.642 "bdev_lvol_resize", 00:04:01.642 "bdev_lvol_decouple_parent", 00:04:01.642 "bdev_lvol_inflate", 00:04:01.642 "bdev_lvol_rename", 00:04:01.642 "bdev_lvol_clone_bdev", 00:04:01.642 "bdev_lvol_clone", 00:04:01.642 "bdev_lvol_snapshot", 00:04:01.642 "bdev_lvol_create", 00:04:01.642 "bdev_lvol_delete_lvstore", 00:04:01.642 "bdev_lvol_rename_lvstore", 00:04:01.642 "bdev_lvol_create_lvstore", 00:04:01.642 "bdev_raid_set_options", 00:04:01.642 "bdev_raid_remove_base_bdev", 00:04:01.642 "bdev_raid_add_base_bdev", 00:04:01.642 "bdev_raid_delete", 00:04:01.642 "bdev_raid_create", 00:04:01.642 "bdev_raid_get_bdevs", 00:04:01.642 "bdev_error_inject_error", 00:04:01.642 "bdev_error_delete", 00:04:01.642 "bdev_error_create", 00:04:01.642 "bdev_split_delete", 00:04:01.642 "bdev_split_create", 00:04:01.642 "bdev_delay_delete", 00:04:01.642 "bdev_delay_create", 00:04:01.642 "bdev_delay_update_latency", 00:04:01.642 "bdev_zone_block_delete", 00:04:01.642 "bdev_zone_block_create", 00:04:01.642 "blobfs_create", 00:04:01.642 "blobfs_detect", 00:04:01.642 "blobfs_set_cache_size", 00:04:01.642 "bdev_aio_delete", 00:04:01.642 "bdev_aio_rescan", 00:04:01.642 "bdev_aio_create", 00:04:01.642 "bdev_ftl_set_property", 00:04:01.642 "bdev_ftl_get_properties", 00:04:01.642 "bdev_ftl_get_stats", 00:04:01.642 "bdev_ftl_unmap", 00:04:01.642 "bdev_ftl_unload", 00:04:01.642 "bdev_ftl_delete", 00:04:01.642 "bdev_ftl_load", 00:04:01.642 "bdev_ftl_create", 00:04:01.642 "bdev_virtio_attach_controller", 00:04:01.642 "bdev_virtio_scsi_get_devices", 00:04:01.642 "bdev_virtio_detach_controller", 00:04:01.642 "bdev_virtio_blk_set_hotplug", 00:04:01.642 "bdev_iscsi_delete", 00:04:01.642 "bdev_iscsi_create", 00:04:01.642 "bdev_iscsi_set_options", 00:04:01.642 "accel_error_inject_error", 00:04:01.642 "ioat_scan_accel_module", 00:04:01.642 "dsa_scan_accel_module", 00:04:01.642 "iaa_scan_accel_module", 00:04:01.642 
"vfu_virtio_create_fs_endpoint", 00:04:01.642 "vfu_virtio_create_scsi_endpoint", 00:04:01.642 "vfu_virtio_scsi_remove_target", 00:04:01.642 "vfu_virtio_scsi_add_target", 00:04:01.642 "vfu_virtio_create_blk_endpoint", 00:04:01.642 "vfu_virtio_delete_endpoint", 00:04:01.642 "keyring_file_remove_key", 00:04:01.642 "keyring_file_add_key", 00:04:01.642 "keyring_linux_set_options", 00:04:01.642 "fsdev_aio_delete", 00:04:01.642 "fsdev_aio_create", 00:04:01.642 "iscsi_get_histogram", 00:04:01.642 "iscsi_enable_histogram", 00:04:01.642 "iscsi_set_options", 00:04:01.642 "iscsi_get_auth_groups", 00:04:01.642 "iscsi_auth_group_remove_secret", 00:04:01.642 "iscsi_auth_group_add_secret", 00:04:01.642 "iscsi_delete_auth_group", 00:04:01.642 "iscsi_create_auth_group", 00:04:01.642 "iscsi_set_discovery_auth", 00:04:01.642 "iscsi_get_options", 00:04:01.642 "iscsi_target_node_request_logout", 00:04:01.642 "iscsi_target_node_set_redirect", 00:04:01.642 "iscsi_target_node_set_auth", 00:04:01.642 "iscsi_target_node_add_lun", 00:04:01.642 "iscsi_get_stats", 00:04:01.642 "iscsi_get_connections", 00:04:01.642 "iscsi_portal_group_set_auth", 00:04:01.642 "iscsi_start_portal_group", 00:04:01.642 "iscsi_delete_portal_group", 00:04:01.642 "iscsi_create_portal_group", 00:04:01.642 "iscsi_get_portal_groups", 00:04:01.642 "iscsi_delete_target_node", 00:04:01.642 "iscsi_target_node_remove_pg_ig_maps", 00:04:01.642 "iscsi_target_node_add_pg_ig_maps", 00:04:01.642 "iscsi_create_target_node", 00:04:01.642 "iscsi_get_target_nodes", 00:04:01.642 "iscsi_delete_initiator_group", 00:04:01.642 "iscsi_initiator_group_remove_initiators", 00:04:01.642 "iscsi_initiator_group_add_initiators", 00:04:01.642 "iscsi_create_initiator_group", 00:04:01.642 "iscsi_get_initiator_groups", 00:04:01.642 "nvmf_set_crdt", 00:04:01.642 "nvmf_set_config", 00:04:01.642 "nvmf_set_max_subsystems", 00:04:01.642 "nvmf_stop_mdns_prr", 00:04:01.642 "nvmf_publish_mdns_prr", 00:04:01.642 "nvmf_subsystem_get_listeners", 00:04:01.642 
"nvmf_subsystem_get_qpairs", 00:04:01.642 "nvmf_subsystem_get_controllers", 00:04:01.642 "nvmf_get_stats", 00:04:01.642 "nvmf_get_transports", 00:04:01.642 "nvmf_create_transport", 00:04:01.642 "nvmf_get_targets", 00:04:01.642 "nvmf_delete_target", 00:04:01.642 "nvmf_create_target", 00:04:01.642 "nvmf_subsystem_allow_any_host", 00:04:01.642 "nvmf_subsystem_set_keys", 00:04:01.642 "nvmf_subsystem_remove_host", 00:04:01.642 "nvmf_subsystem_add_host", 00:04:01.642 "nvmf_ns_remove_host", 00:04:01.642 "nvmf_ns_add_host", 00:04:01.642 "nvmf_subsystem_remove_ns", 00:04:01.643 "nvmf_subsystem_set_ns_ana_group", 00:04:01.643 "nvmf_subsystem_add_ns", 00:04:01.643 "nvmf_subsystem_listener_set_ana_state", 00:04:01.643 "nvmf_discovery_get_referrals", 00:04:01.643 "nvmf_discovery_remove_referral", 00:04:01.643 "nvmf_discovery_add_referral", 00:04:01.643 "nvmf_subsystem_remove_listener", 00:04:01.643 "nvmf_subsystem_add_listener", 00:04:01.643 "nvmf_delete_subsystem", 00:04:01.643 "nvmf_create_subsystem", 00:04:01.643 "nvmf_get_subsystems", 00:04:01.643 "env_dpdk_get_mem_stats", 00:04:01.643 "nbd_get_disks", 00:04:01.643 "nbd_stop_disk", 00:04:01.643 "nbd_start_disk", 00:04:01.643 "ublk_recover_disk", 00:04:01.643 "ublk_get_disks", 00:04:01.643 "ublk_stop_disk", 00:04:01.643 "ublk_start_disk", 00:04:01.643 "ublk_destroy_target", 00:04:01.643 "ublk_create_target", 00:04:01.643 "virtio_blk_create_transport", 00:04:01.643 "virtio_blk_get_transports", 00:04:01.643 "vhost_controller_set_coalescing", 00:04:01.643 "vhost_get_controllers", 00:04:01.643 "vhost_delete_controller", 00:04:01.643 "vhost_create_blk_controller", 00:04:01.643 "vhost_scsi_controller_remove_target", 00:04:01.643 "vhost_scsi_controller_add_target", 00:04:01.643 "vhost_start_scsi_controller", 00:04:01.643 "vhost_create_scsi_controller", 00:04:01.643 "thread_set_cpumask", 00:04:01.643 "scheduler_set_options", 00:04:01.643 "framework_get_governor", 00:04:01.643 "framework_get_scheduler", 00:04:01.643 
"framework_set_scheduler", 00:04:01.643 "framework_get_reactors", 00:04:01.643 "thread_get_io_channels", 00:04:01.643 "thread_get_pollers", 00:04:01.643 "thread_get_stats", 00:04:01.643 "framework_monitor_context_switch", 00:04:01.643 "spdk_kill_instance", 00:04:01.643 "log_enable_timestamps", 00:04:01.643 "log_get_flags", 00:04:01.643 "log_clear_flag", 00:04:01.643 "log_set_flag", 00:04:01.643 "log_get_level", 00:04:01.643 "log_set_level", 00:04:01.643 "log_get_print_level", 00:04:01.643 "log_set_print_level", 00:04:01.643 "framework_enable_cpumask_locks", 00:04:01.643 "framework_disable_cpumask_locks", 00:04:01.643 "framework_wait_init", 00:04:01.643 "framework_start_init", 00:04:01.643 "scsi_get_devices", 00:04:01.643 "bdev_get_histogram", 00:04:01.643 "bdev_enable_histogram", 00:04:01.643 "bdev_set_qos_limit", 00:04:01.643 "bdev_set_qd_sampling_period", 00:04:01.643 "bdev_get_bdevs", 00:04:01.643 "bdev_reset_iostat", 00:04:01.643 "bdev_get_iostat", 00:04:01.643 "bdev_examine", 00:04:01.643 "bdev_wait_for_examine", 00:04:01.643 "bdev_set_options", 00:04:01.643 "accel_get_stats", 00:04:01.643 "accel_set_options", 00:04:01.643 "accel_set_driver", 00:04:01.643 "accel_crypto_key_destroy", 00:04:01.643 "accel_crypto_keys_get", 00:04:01.643 "accel_crypto_key_create", 00:04:01.643 "accel_assign_opc", 00:04:01.643 "accel_get_module_info", 00:04:01.643 "accel_get_opc_assignments", 00:04:01.643 "vmd_rescan", 00:04:01.643 "vmd_remove_device", 00:04:01.643 "vmd_enable", 00:04:01.643 "sock_get_default_impl", 00:04:01.643 "sock_set_default_impl", 00:04:01.643 "sock_impl_set_options", 00:04:01.643 "sock_impl_get_options", 00:04:01.643 "iobuf_get_stats", 00:04:01.643 "iobuf_set_options", 00:04:01.643 "keyring_get_keys", 00:04:01.643 "vfu_tgt_set_base_path", 00:04:01.643 "framework_get_pci_devices", 00:04:01.643 "framework_get_config", 00:04:01.643 "framework_get_subsystems", 00:04:01.643 "fsdev_set_opts", 00:04:01.643 "fsdev_get_opts", 00:04:01.643 "trace_get_info", 
00:04:01.643 "trace_get_tpoint_group_mask", 00:04:01.643 "trace_disable_tpoint_group", 00:04:01.643 "trace_enable_tpoint_group", 00:04:01.643 "trace_clear_tpoint_mask", 00:04:01.643 "trace_set_tpoint_mask", 00:04:01.643 "notify_get_notifications", 00:04:01.643 "notify_get_types", 00:04:01.643 "spdk_get_version", 00:04:01.643 "rpc_get_methods" 00:04:01.643 ] 00:04:01.643 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:01.643 09:36:24 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.643 09:36:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.643 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:01.643 09:36:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2722422 00:04:01.643 09:36:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2722422 ']' 00:04:01.643 09:36:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2722422 00:04:01.643 09:36:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:01.643 09:36:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.643 09:36:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2722422 00:04:01.903 09:36:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.903 09:36:24 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.903 09:36:24 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2722422' 00:04:01.903 killing process with pid 2722422 00:04:01.903 09:36:24 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2722422 00:04:01.903 09:36:24 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2722422 00:04:02.162 00:04:02.163 real 0m1.160s 00:04:02.163 user 0m1.964s 00:04:02.163 sys 0m0.443s 00:04:02.163 09:36:25 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.163 09:36:25 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:02.163 ************************************ 00:04:02.163 END TEST spdkcli_tcp 00:04:02.163 ************************************ 00:04:02.163 09:36:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.163 09:36:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.163 09:36:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.163 09:36:25 -- common/autotest_common.sh@10 -- # set +x 00:04:02.163 ************************************ 00:04:02.163 START TEST dpdk_mem_utility 00:04:02.163 ************************************ 00:04:02.163 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.163 * Looking for test storage... 00:04:02.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:02.163 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:02.163 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # lcov --version 00:04:02.163 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:02.422 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.422 09:36:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:02.422 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.422 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 
00:04:02.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.422 --rc genhtml_branch_coverage=1 00:04:02.422 --rc genhtml_function_coverage=1 00:04:02.422 --rc genhtml_legend=1 00:04:02.422 --rc geninfo_all_blocks=1 00:04:02.422 --rc geninfo_unexecuted_blocks=1 00:04:02.422 00:04:02.422 ' 00:04:02.422 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:02.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.422 --rc genhtml_branch_coverage=1 00:04:02.422 --rc genhtml_function_coverage=1 00:04:02.422 --rc genhtml_legend=1 00:04:02.422 --rc geninfo_all_blocks=1 00:04:02.422 --rc geninfo_unexecuted_blocks=1 00:04:02.422 00:04:02.422 ' 00:04:02.422 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:02.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.422 --rc genhtml_branch_coverage=1 00:04:02.422 --rc genhtml_function_coverage=1 00:04:02.422 --rc genhtml_legend=1 00:04:02.422 --rc geninfo_all_blocks=1 00:04:02.422 --rc geninfo_unexecuted_blocks=1 00:04:02.422 00:04:02.422 ' 00:04:02.422 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:02.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.422 --rc genhtml_branch_coverage=1 00:04:02.422 --rc genhtml_function_coverage=1 00:04:02.422 --rc genhtml_legend=1 00:04:02.422 --rc geninfo_all_blocks=1 00:04:02.422 --rc geninfo_unexecuted_blocks=1 00:04:02.423 00:04:02.423 ' 00:04:02.423 09:36:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:02.423 09:36:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2722729 00:04:02.423 09:36:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2722729 00:04:02.423 09:36:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.423 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2722729 ']' 00:04:02.423 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.423 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.423 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.423 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.423 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.423 [2024-11-20 09:36:25.591559] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:02.423 [2024-11-20 09:36:25.591604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722729 ] 00:04:02.423 [2024-11-20 09:36:25.665132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.423 [2024-11-20 09:36:25.705825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.681 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.681 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:02.681 09:36:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:02.681 09:36:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:02.681 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.681 
09:36:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.682 { 00:04:02.682 "filename": "/tmp/spdk_mem_dump.txt" 00:04:02.682 } 00:04:02.682 09:36:25 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.682 09:36:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:02.682 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:02.682 1 heaps totaling size 810.000000 MiB 00:04:02.682 size: 810.000000 MiB heap id: 0 00:04:02.682 end heaps---------- 00:04:02.682 9 mempools totaling size 595.772034 MiB 00:04:02.682 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:02.682 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:02.682 size: 92.545471 MiB name: bdev_io_2722729 00:04:02.682 size: 50.003479 MiB name: msgpool_2722729 00:04:02.682 size: 36.509338 MiB name: fsdev_io_2722729 00:04:02.682 size: 21.763794 MiB name: PDU_Pool 00:04:02.682 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:02.682 size: 4.133484 MiB name: evtpool_2722729 00:04:02.682 size: 0.026123 MiB name: Session_Pool 00:04:02.682 end mempools------- 00:04:02.682 6 memzones totaling size 4.142822 MiB 00:04:02.682 size: 1.000366 MiB name: RG_ring_0_2722729 00:04:02.682 size: 1.000366 MiB name: RG_ring_1_2722729 00:04:02.682 size: 1.000366 MiB name: RG_ring_4_2722729 00:04:02.682 size: 1.000366 MiB name: RG_ring_5_2722729 00:04:02.682 size: 0.125366 MiB name: RG_ring_2_2722729 00:04:02.682 size: 0.015991 MiB name: RG_ring_3_2722729 00:04:02.682 end memzones------- 00:04:02.682 09:36:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:02.941 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:02.941 list of free elements. 
size: 10.862488 MiB 00:04:02.941 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:02.941 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:02.941 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:02.941 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:02.941 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:02.941 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:02.941 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:02.941 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:02.941 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:02.941 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:02.941 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:02.941 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:02.941 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:02.941 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:02.941 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:02.941 list of standard malloc elements. 
size: 199.218628 MiB 00:04:02.941 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:02.941 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:02.941 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:02.941 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:02.941 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:02.941 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:02.941 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:02.941 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:02.941 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:02.941 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:02.941 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:02.941 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:02.941 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:02.941 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:02.941 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:02.941 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:02.941 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:02.941 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:02.941 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:02.942 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:02.942 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:02.942 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:02.942 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:02.942 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:02.942 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:02.942 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:02.942 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:02.942 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:02.942 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:02.942 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:02.942 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:02.942 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:02.942 list of memzone associated elements. 
size: 599.918884 MiB 00:04:02.942 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:02.942 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:02.942 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:02.942 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:02.942 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:02.942 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2722729_0 00:04:02.942 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:02.942 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2722729_0 00:04:02.942 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:02.942 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2722729_0 00:04:02.942 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:02.942 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:02.942 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:02.942 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:02.942 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:02.942 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2722729_0 00:04:02.942 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:02.942 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2722729 00:04:02.942 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:02.942 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2722729 00:04:02.942 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:02.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:02.942 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:02.942 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:02.942 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:02.942 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:02.942 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:02.942 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:02.942 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:02.942 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2722729 00:04:02.942 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:02.942 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2722729 00:04:02.942 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:02.942 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2722729 00:04:02.942 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:02.942 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2722729 00:04:02.942 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:02.942 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2722729 00:04:02.942 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:02.942 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2722729 00:04:02.942 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:02.942 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:02.942 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:02.942 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:02.942 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:02.942 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:02.942 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:02.942 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2722729 00:04:02.942 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:02.942 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2722729 00:04:02.942 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:02.942 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:02.942 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:02.942 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:02.942 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:02.942 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2722729 00:04:02.942 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:02.942 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:02.942 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:02.942 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2722729 00:04:02.942 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:02.942 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2722729 00:04:02.942 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:02.942 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2722729 00:04:02.942 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:02.942 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:02.942 09:36:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:02.942 09:36:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2722729 00:04:02.942 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2722729 ']' 00:04:02.942 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2722729 00:04:02.942 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:02.942 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.942 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2722729 00:04:02.942 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.942 09:36:26 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.942 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2722729' 00:04:02.943 killing process with pid 2722729 00:04:02.943 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2722729 00:04:02.943 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2722729 00:04:03.202 00:04:03.202 real 0m1.035s 00:04:03.202 user 0m0.959s 00:04:03.202 sys 0m0.427s 00:04:03.202 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.202 09:36:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.202 ************************************ 00:04:03.202 END TEST dpdk_mem_utility 00:04:03.202 ************************************ 00:04:03.202 09:36:26 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:03.202 09:36:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.202 09:36:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.202 09:36:26 -- common/autotest_common.sh@10 -- # set +x 00:04:03.202 ************************************ 00:04:03.202 START TEST event 00:04:03.202 ************************************ 00:04:03.202 09:36:26 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:03.461 * Looking for test storage... 
00:04:03.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:03.461 09:36:26 event -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:03.461 09:36:26 event -- common/autotest_common.sh@1703 -- # lcov --version 00:04:03.461 09:36:26 event -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:03.461 09:36:26 event -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:03.461 09:36:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.461 09:36:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.461 09:36:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.461 09:36:26 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.461 09:36:26 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.461 09:36:26 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.461 09:36:26 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.461 09:36:26 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.461 09:36:26 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.461 09:36:26 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.461 09:36:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.461 09:36:26 event -- scripts/common.sh@344 -- # case "$op" in 00:04:03.461 09:36:26 event -- scripts/common.sh@345 -- # : 1 00:04:03.461 09:36:26 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.461 09:36:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.461 09:36:26 event -- scripts/common.sh@365 -- # decimal 1 00:04:03.461 09:36:26 event -- scripts/common.sh@353 -- # local d=1 00:04:03.461 09:36:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.461 09:36:26 event -- scripts/common.sh@355 -- # echo 1 00:04:03.461 09:36:26 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.461 09:36:26 event -- scripts/common.sh@366 -- # decimal 2 00:04:03.461 09:36:26 event -- scripts/common.sh@353 -- # local d=2 00:04:03.461 09:36:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.461 09:36:26 event -- scripts/common.sh@355 -- # echo 2 00:04:03.461 09:36:26 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.461 09:36:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.461 09:36:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.461 09:36:26 event -- scripts/common.sh@368 -- # return 0 00:04:03.461 09:36:26 event -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.461 09:36:26 event -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.461 --rc genhtml_branch_coverage=1 00:04:03.462 --rc genhtml_function_coverage=1 00:04:03.462 --rc genhtml_legend=1 00:04:03.462 --rc geninfo_all_blocks=1 00:04:03.462 --rc geninfo_unexecuted_blocks=1 00:04:03.462 00:04:03.462 ' 00:04:03.462 09:36:26 event -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:03.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.462 --rc genhtml_branch_coverage=1 00:04:03.462 --rc genhtml_function_coverage=1 00:04:03.462 --rc genhtml_legend=1 00:04:03.462 --rc geninfo_all_blocks=1 00:04:03.462 --rc geninfo_unexecuted_blocks=1 00:04:03.462 00:04:03.462 ' 00:04:03.462 09:36:26 event -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:03.462 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:03.462 --rc genhtml_branch_coverage=1 00:04:03.462 --rc genhtml_function_coverage=1 00:04:03.462 --rc genhtml_legend=1 00:04:03.462 --rc geninfo_all_blocks=1 00:04:03.462 --rc geninfo_unexecuted_blocks=1 00:04:03.462 00:04:03.462 ' 00:04:03.462 09:36:26 event -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:03.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.462 --rc genhtml_branch_coverage=1 00:04:03.462 --rc genhtml_function_coverage=1 00:04:03.462 --rc genhtml_legend=1 00:04:03.462 --rc geninfo_all_blocks=1 00:04:03.462 --rc geninfo_unexecuted_blocks=1 00:04:03.462 00:04:03.462 ' 00:04:03.462 09:36:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:03.462 09:36:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:03.462 09:36:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:03.462 09:36:26 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:03.462 09:36:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.462 09:36:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.462 ************************************ 00:04:03.462 START TEST event_perf 00:04:03.462 ************************************ 00:04:03.462 09:36:26 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:03.462 Running I/O for 1 seconds...[2024-11-20 09:36:26.702134] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:04:03.462 [2024-11-20 09:36:26.702200] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723019 ] 00:04:03.462 [2024-11-20 09:36:26.783547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:03.720 [2024-11-20 09:36:26.828449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.720 [2024-11-20 09:36:26.828561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:03.720 [2024-11-20 09:36:26.828665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.720 [2024-11-20 09:36:26.828666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:04.655 Running I/O for 1 seconds... 00:04:04.655 lcore 0: 200751 00:04:04.656 lcore 1: 200748 00:04:04.656 lcore 2: 200748 00:04:04.656 lcore 3: 200750 00:04:04.656 done. 
00:04:04.656 00:04:04.656 real 0m1.188s 00:04:04.656 user 0m4.106s 00:04:04.656 sys 0m0.079s 00:04:04.656 09:36:27 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.656 09:36:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:04.656 ************************************ 00:04:04.656 END TEST event_perf 00:04:04.656 ************************************ 00:04:04.656 09:36:27 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:04.656 09:36:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:04.656 09:36:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.656 09:36:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:04.656 ************************************ 00:04:04.656 START TEST event_reactor 00:04:04.656 ************************************ 00:04:04.656 09:36:27 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:04.656 [2024-11-20 09:36:27.960685] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:04:04.656 [2024-11-20 09:36:27.960754] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723250 ] 00:04:04.914 [2024-11-20 09:36:28.040867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.914 [2024-11-20 09:36:28.082651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.851 test_start 00:04:05.851 oneshot 00:04:05.851 tick 100 00:04:05.851 tick 100 00:04:05.851 tick 250 00:04:05.851 tick 100 00:04:05.851 tick 100 00:04:05.851 tick 250 00:04:05.851 tick 100 00:04:05.851 tick 500 00:04:05.851 tick 100 00:04:05.851 tick 100 00:04:05.851 tick 250 00:04:05.851 tick 100 00:04:05.851 tick 100 00:04:05.851 test_end 00:04:05.851 00:04:05.851 real 0m1.180s 00:04:05.851 user 0m1.104s 00:04:05.851 sys 0m0.072s 00:04:05.851 09:36:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.851 09:36:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:05.851 ************************************ 00:04:05.852 END TEST event_reactor 00:04:05.852 ************************************ 00:04:05.852 09:36:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:05.852 09:36:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:05.852 09:36:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.852 09:36:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.111 ************************************ 00:04:06.111 START TEST event_reactor_perf 00:04:06.111 ************************************ 00:04:06.111 09:36:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:06.111 [2024-11-20 09:36:29.213851] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:06.111 [2024-11-20 09:36:29.213920] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723420 ] 00:04:06.111 [2024-11-20 09:36:29.294547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.111 [2024-11-20 09:36:29.337487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.047 test_start 00:04:07.047 test_end 00:04:07.047 Performance: 499557 events per second 00:04:07.307 00:04:07.307 real 0m1.187s 00:04:07.307 user 0m1.104s 00:04:07.307 sys 0m0.079s 00:04:07.307 09:36:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.307 09:36:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.307 ************************************ 00:04:07.307 END TEST event_reactor_perf 00:04:07.307 ************************************ 00:04:07.307 09:36:30 event -- event/event.sh@49 -- # uname -s 00:04:07.307 09:36:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:07.307 09:36:30 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.307 09:36:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.307 09:36:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.307 09:36:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.307 ************************************ 00:04:07.307 START TEST event_scheduler 00:04:07.307 ************************************ 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.307 * Looking for test storage... 00:04:07.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1703 -- # lcov --version 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.307 09:36:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:07.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.307 --rc genhtml_branch_coverage=1 00:04:07.307 --rc genhtml_function_coverage=1 00:04:07.307 --rc genhtml_legend=1 00:04:07.307 --rc geninfo_all_blocks=1 00:04:07.307 --rc geninfo_unexecuted_blocks=1 00:04:07.307 00:04:07.307 ' 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:07.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.307 --rc genhtml_branch_coverage=1 00:04:07.307 --rc genhtml_function_coverage=1 00:04:07.307 --rc 
genhtml_legend=1 00:04:07.307 --rc geninfo_all_blocks=1 00:04:07.307 --rc geninfo_unexecuted_blocks=1 00:04:07.307 00:04:07.307 ' 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:07.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.307 --rc genhtml_branch_coverage=1 00:04:07.307 --rc genhtml_function_coverage=1 00:04:07.307 --rc genhtml_legend=1 00:04:07.307 --rc geninfo_all_blocks=1 00:04:07.307 --rc geninfo_unexecuted_blocks=1 00:04:07.307 00:04:07.307 ' 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:07.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.307 --rc genhtml_branch_coverage=1 00:04:07.307 --rc genhtml_function_coverage=1 00:04:07.307 --rc genhtml_legend=1 00:04:07.307 --rc geninfo_all_blocks=1 00:04:07.307 --rc geninfo_unexecuted_blocks=1 00:04:07.307 00:04:07.307 ' 00:04:07.307 09:36:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:07.307 09:36:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2723735 00:04:07.307 09:36:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:07.307 09:36:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.307 09:36:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2723735 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2723735 ']' 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.307 09:36:30 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.567 09:36:30 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.567 09:36:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.567 [2024-11-20 09:36:30.677624] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:07.567 [2024-11-20 09:36:30.677674] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2723735 ] 00:04:07.567 [2024-11-20 09:36:30.755251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:07.567 [2024-11-20 09:36:30.801730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.567 [2024-11-20 09:36:30.801839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.567 [2024-11-20 09:36:30.801946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:07.567 [2024-11-20 09:36:30.801965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:07.567 09:36:30 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.567 09:36:30 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:07.567 09:36:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:07.567 09:36:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.567 09:36:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.567 [2024-11-20 09:36:30.850548] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:07.567 [2024-11-20 09:36:30.850566] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:07.567 [2024-11-20 09:36:30.850577] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:07.567 [2024-11-20 09:36:30.850583] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:07.567 [2024-11-20 09:36:30.850589] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:07.567 09:36:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.567 09:36:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:07.567 09:36:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.567 09:36:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 [2024-11-20 09:36:30.924608] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:07.827 09:36:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:07.827 09:36:30 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.827 09:36:30 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.827 09:36:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 ************************************ 00:04:07.827 START TEST scheduler_create_thread 00:04:07.827 ************************************ 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 2 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 3 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 4 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 5 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:30 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 6 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 7 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 8 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:31 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 9 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 10 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.827 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.764 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.764 09:36:31 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:08.764 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.764 09:36:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.142 09:36:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.142 09:36:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:10.142 09:36:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:10.142 09:36:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.142 09:36:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.077 09:36:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.077 00:04:11.077 real 0m3.379s 00:04:11.077 user 0m0.025s 00:04:11.077 sys 0m0.004s 00:04:11.077 09:36:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.077 09:36:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.077 ************************************ 00:04:11.077 END TEST scheduler_create_thread 00:04:11.077 ************************************ 00:04:11.077 09:36:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:11.077 09:36:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2723735 00:04:11.077 09:36:34 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2723735 ']' 00:04:11.077 09:36:34 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2723735 00:04:11.077 09:36:34 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:11.077 09:36:34 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.077 09:36:34 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2723735 00:04:11.336 09:36:34 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:11.336 09:36:34 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:11.336 09:36:34 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2723735' 00:04:11.336 killing process with pid 2723735 00:04:11.336 09:36:34 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2723735 00:04:11.336 09:36:34 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2723735 00:04:11.596 [2024-11-20 09:36:34.716542] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:11.596 00:04:11.596 real 0m4.468s 00:04:11.596 user 0m7.817s 00:04:11.596 sys 0m0.376s 00:04:11.596 09:36:34 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.596 09:36:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:11.596 ************************************ 00:04:11.596 END TEST event_scheduler 00:04:11.596 ************************************ 00:04:11.855 09:36:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:11.855 09:36:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:11.855 09:36:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.855 09:36:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.855 09:36:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.855 ************************************ 00:04:11.855 START TEST app_repeat 00:04:11.855 ************************************ 00:04:11.855 09:36:35 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2724557 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2724557' 00:04:11.855 Process app_repeat pid: 2724557 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:11.855 spdk_app_start Round 0 00:04:11.855 09:36:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2724557 /var/tmp/spdk-nbd.sock 00:04:11.855 09:36:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2724557 ']' 00:04:11.855 09:36:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:11.855 09:36:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.855 09:36:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:11.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:11.855 09:36:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.855 09:36:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:11.855 [2024-11-20 09:36:35.038647] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:04:11.855 [2024-11-20 09:36:35.038698] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724557 ] 00:04:11.855 [2024-11-20 09:36:35.114191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:11.855 [2024-11-20 09:36:35.158860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.855 [2024-11-20 09:36:35.158858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.114 09:36:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.114 09:36:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:12.115 09:36:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.115 Malloc0 00:04:12.374 09:36:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:12.374 Malloc1 00:04:12.374 09:36:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:12.374 
09:36:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.374 09:36:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:12.633 /dev/nbd0 00:04:12.633 09:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:12.633 09:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:12.633 1+0 records in 00:04:12.633 1+0 records out 00:04:12.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192658 s, 21.3 MB/s 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:12.633 09:36:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:12.633 09:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.633 09:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.633 09:36:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:12.893 /dev/nbd1 00:04:12.893 09:36:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:12.893 09:36:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:12.893 09:36:36 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:12.893 1+0 records in 00:04:12.893 1+0 records out 00:04:12.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253412 s, 16.2 MB/s 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:12.893 09:36:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:12.893 09:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.893 09:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.893 09:36:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.893 09:36:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.893 09:36:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:13.153 { 00:04:13.153 "nbd_device": "/dev/nbd0", 00:04:13.153 "bdev_name": "Malloc0" 00:04:13.153 }, 00:04:13.153 { 00:04:13.153 "nbd_device": "/dev/nbd1", 00:04:13.153 "bdev_name": "Malloc1" 00:04:13.153 } 00:04:13.153 ]' 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.153 { 00:04:13.153 "nbd_device": "/dev/nbd0", 00:04:13.153 "bdev_name": "Malloc0" 00:04:13.153 
}, 00:04:13.153 { 00:04:13.153 "nbd_device": "/dev/nbd1", 00:04:13.153 "bdev_name": "Malloc1" 00:04:13.153 } 00:04:13.153 ]' 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.153 /dev/nbd1' 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.153 /dev/nbd1' 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:13.153 256+0 records in 00:04:13.153 256+0 records out 00:04:13.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107208 s, 97.8 MB/s 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:13.153 256+0 records in 00:04:13.153 256+0 records out 00:04:13.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138257 s, 75.8 MB/s 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:13.153 256+0 records in 00:04:13.153 256+0 records out 00:04:13.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147821 s, 70.9 MB/s 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:13.153 09:36:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:13.154 09:36:36 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.154 09:36:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:13.413 09:36:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:13.672 09:36:36 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.672 09:36:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:13.931 09:36:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:13.931 09:36:37 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.190 09:36:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:14.449 [2024-11-20 09:36:37.522089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.449 [2024-11-20 09:36:37.560285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.449 [2024-11-20 09:36:37.560286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.449 [2024-11-20 09:36:37.601327] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:14.449 [2024-11-20 09:36:37.601366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:17.738 09:36:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:17.738 09:36:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:17.738 spdk_app_start Round 1 00:04:17.738 09:36:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2724557 /var/tmp/spdk-nbd.sock 00:04:17.738 09:36:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2724557 ']' 00:04:17.738 09:36:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.738 09:36:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.738 09:36:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:17.738 09:36:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.738 09:36:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.738 09:36:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.738 09:36:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:17.738 09:36:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:17.738 Malloc0 00:04:17.738 09:36:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:17.738 Malloc1 00:04:17.738 09:36:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.738 09:36:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:17.997 /dev/nbd0 00:04:17.997 09:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:17.997 09:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:17.997 1+0 records in 00:04:17.997 1+0 records out 00:04:17.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191421 s, 21.4 MB/s 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:17.997 09:36:41 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:17.997 09:36:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:17.997 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:17.997 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.997 09:36:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:18.256 /dev/nbd1 00:04:18.256 09:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:18.256 09:36:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.256 1+0 records in 00:04:18.256 1+0 records out 00:04:18.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206642 s, 19.8 MB/s 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:18.256 09:36:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:18.256 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.256 09:36:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.256 09:36:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.256 09:36:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.256 09:36:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:18.516 { 00:04:18.516 "nbd_device": "/dev/nbd0", 00:04:18.516 "bdev_name": "Malloc0" 00:04:18.516 }, 00:04:18.516 { 00:04:18.516 "nbd_device": "/dev/nbd1", 00:04:18.516 "bdev_name": "Malloc1" 00:04:18.516 } 00:04:18.516 ]' 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:18.516 { 00:04:18.516 "nbd_device": "/dev/nbd0", 00:04:18.516 "bdev_name": "Malloc0" 00:04:18.516 }, 00:04:18.516 { 00:04:18.516 "nbd_device": "/dev/nbd1", 00:04:18.516 "bdev_name": "Malloc1" 00:04:18.516 } 00:04:18.516 ]' 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:18.516 /dev/nbd1' 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:18.516 /dev/nbd1' 00:04:18.516 
09:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:18.516 256+0 records in 00:04:18.516 256+0 records out 00:04:18.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106349 s, 98.6 MB/s 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:18.516 256+0 records in 00:04:18.516 256+0 records out 00:04:18.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142926 s, 73.4 MB/s 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:18.516 256+0 records in 00:04:18.516 256+0 records out 00:04:18.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148452 s, 70.6 MB/s 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.516 09:36:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.775 09:36:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:19.033 09:36:42 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.033 09:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:19.292 09:36:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:19.292 09:36:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:19.552 09:36:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:19.552 [2024-11-20 09:36:42.871049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.811 [2024-11-20 09:36:42.910478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.811 [2024-11-20 09:36:42.910479] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.811 [2024-11-20 09:36:42.952345] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:19.811 [2024-11-20 09:36:42.952387] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:23.100 09:36:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:23.100 09:36:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:23.100 spdk_app_start Round 2 00:04:23.100 09:36:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2724557 /var/tmp/spdk-nbd.sock 00:04:23.100 09:36:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2724557 ']' 00:04:23.100 09:36:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.100 09:36:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.100 09:36:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:23.100 09:36:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.101 09:36:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:23.101 09:36:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.101 09:36:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:23.101 09:36:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.101 Malloc0 00:04:23.101 09:36:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.101 Malloc1 00:04:23.101 09:36:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.101 09:36:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.360 /dev/nbd0 00:04:23.360 09:36:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.360 09:36:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.360 1+0 records in 00:04:23.360 1+0 records out 00:04:23.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233013 s, 17.6 MB/s 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.360 09:36:46 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.360 09:36:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.360 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.360 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.360 09:36:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.619 /dev/nbd1 00:04:23.619 09:36:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.619 09:36:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.619 1+0 records in 00:04:23.619 1+0 records out 00:04:23.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229845 s, 17.8 MB/s 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.619 09:36:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.619 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.619 09:36:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.619 09:36:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.619 09:36:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.619 09:36:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:23.879 { 00:04:23.879 "nbd_device": "/dev/nbd0", 00:04:23.879 "bdev_name": "Malloc0" 00:04:23.879 }, 00:04:23.879 { 00:04:23.879 "nbd_device": "/dev/nbd1", 00:04:23.879 "bdev_name": "Malloc1" 00:04:23.879 } 00:04:23.879 ]' 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.879 { 00:04:23.879 "nbd_device": "/dev/nbd0", 00:04:23.879 "bdev_name": "Malloc0" 00:04:23.879 }, 00:04:23.879 { 00:04:23.879 "nbd_device": "/dev/nbd1", 00:04:23.879 "bdev_name": "Malloc1" 00:04:23.879 } 00:04:23.879 ]' 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.879 /dev/nbd1' 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.879 /dev/nbd1' 00:04:23.879 
09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:23.879 256+0 records in 00:04:23.879 256+0 records out 00:04:23.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445237 s, 236 MB/s 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:23.879 256+0 records in 00:04:23.879 256+0 records out 00:04:23.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134048 s, 78.2 MB/s 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.879 256+0 records in 00:04:23.879 256+0 records out 00:04:23.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144614 s, 72.5 MB/s 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.879 09:36:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.138 09:36:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.397 09:36:47 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.397 09:36:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.656 09:36:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.656 09:36:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.915 09:36:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:24.915 [2024-11-20 09:36:48.237084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.174 [2024-11-20 09:36:48.276008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.174 [2024-11-20 09:36:48.276008] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.174 [2024-11-20 09:36:48.317299] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.174 [2024-11-20 09:36:48.317340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.463 09:36:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2724557 /var/tmp/spdk-nbd.sock 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2724557 ']' 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:28.463 09:36:51 event.app_repeat -- event/event.sh@39 -- # killprocess 2724557 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2724557 ']' 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2724557 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2724557 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2724557' 00:04:28.463 killing process with pid 2724557 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2724557 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2724557 00:04:28.463 spdk_app_start is called in Round 0. 00:04:28.463 Shutdown signal received, stop current app iteration 00:04:28.463 Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 reinitialization... 00:04:28.463 spdk_app_start is called in Round 1. 00:04:28.463 Shutdown signal received, stop current app iteration 00:04:28.463 Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 reinitialization... 00:04:28.463 spdk_app_start is called in Round 2. 
00:04:28.463 Shutdown signal received, stop current app iteration 00:04:28.463 Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 reinitialization... 00:04:28.463 spdk_app_start is called in Round 3. 00:04:28.463 Shutdown signal received, stop current app iteration 00:04:28.463 09:36:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:28.463 09:36:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:28.463 00:04:28.463 real 0m16.494s 00:04:28.463 user 0m36.200s 00:04:28.463 sys 0m2.661s 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.463 09:36:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 ************************************ 00:04:28.463 END TEST app_repeat 00:04:28.463 ************************************ 00:04:28.463 09:36:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:28.463 09:36:51 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.463 09:36:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.463 09:36:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.463 09:36:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 ************************************ 00:04:28.463 START TEST cpu_locks 00:04:28.463 ************************************ 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.463 * Looking for test storage... 
00:04:28.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1703 -- # lcov --version 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.463 09:36:51 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:28.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.463 --rc genhtml_branch_coverage=1 00:04:28.463 --rc genhtml_function_coverage=1 00:04:28.463 --rc genhtml_legend=1 00:04:28.463 --rc geninfo_all_blocks=1 00:04:28.463 --rc geninfo_unexecuted_blocks=1 00:04:28.463 00:04:28.463 ' 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:28.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.463 --rc genhtml_branch_coverage=1 00:04:28.463 --rc genhtml_function_coverage=1 00:04:28.463 --rc genhtml_legend=1 00:04:28.463 --rc geninfo_all_blocks=1 00:04:28.463 --rc geninfo_unexecuted_blocks=1 
00:04:28.463 00:04:28.463 ' 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:28.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.463 --rc genhtml_branch_coverage=1 00:04:28.463 --rc genhtml_function_coverage=1 00:04:28.463 --rc genhtml_legend=1 00:04:28.463 --rc geninfo_all_blocks=1 00:04:28.463 --rc geninfo_unexecuted_blocks=1 00:04:28.463 00:04:28.463 ' 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:28.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.463 --rc genhtml_branch_coverage=1 00:04:28.463 --rc genhtml_function_coverage=1 00:04:28.463 --rc genhtml_legend=1 00:04:28.463 --rc geninfo_all_blocks=1 00:04:28.463 --rc geninfo_unexecuted_blocks=1 00:04:28.463 00:04:28.463 ' 00:04:28.463 09:36:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:28.463 09:36:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:28.463 09:36:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:28.463 09:36:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.463 09:36:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 ************************************ 00:04:28.463 START TEST default_locks 00:04:28.463 ************************************ 00:04:28.463 09:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:28.463 09:36:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2727553 00:04:28.463 09:36:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2727553 00:04:28.463 09:36:51 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.463 09:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2727553 ']' 00:04:28.463 09:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.463 09:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.463 09:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.463 09:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.463 09:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.723 [2024-11-20 09:36:51.819777] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:04:28.723 [2024-11-20 09:36:51.819820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727553 ] 00:04:28.723 [2024-11-20 09:36:51.895370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.723 [2024-11-20 09:36:51.935917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.982 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.982 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:28.982 09:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2727553 00:04:28.982 09:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2727553 00:04:28.982 09:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.549 lslocks: write error 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2727553 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2727553 ']' 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2727553 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2727553 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2727553' 00:04:29.550 killing process with pid 2727553 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2727553 00:04:29.550 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2727553 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2727553 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2727553 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2727553 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2727553 ']' 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2727553) - No such process 00:04:29.808 ERROR: process (pid: 2727553) is no longer running 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:29.808 00:04:29.808 real 0m1.191s 00:04:29.808 user 0m1.147s 00:04:29.808 sys 0m0.545s 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.808 09:36:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.808 ************************************ 00:04:29.808 END TEST default_locks 00:04:29.808 ************************************ 00:04:29.808 09:36:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:29.808 09:36:52 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.808 09:36:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.808 09:36:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.809 ************************************ 00:04:29.809 START TEST default_locks_via_rpc 00:04:29.809 ************************************ 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2727811 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2727811 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2727811 ']' 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.809 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.809 [2024-11-20 09:36:53.084996] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:04:29.809 [2024-11-20 09:36:53.085043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727811 ] 00:04:30.067 [2024-11-20 09:36:53.160731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.067 [2024-11-20 09:36:53.199148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.326 09:36:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2727811 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2727811 00:04:30.326 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2727811 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2727811 ']' 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2727811 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2727811 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2727811' 00:04:30.585 killing process with pid 2727811 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2727811 00:04:30.585 09:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2727811 00:04:30.845 00:04:30.845 real 0m1.008s 00:04:30.845 user 0m0.974s 00:04:30.845 sys 0m0.453s 00:04:30.845 09:36:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.845 09:36:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.845 ************************************ 00:04:30.845 END TEST default_locks_via_rpc 00:04:30.845 ************************************ 00:04:30.845 09:36:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:30.845 09:36:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.845 09:36:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.845 09:36:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.845 ************************************ 00:04:30.845 START TEST non_locking_app_on_locked_coremask 00:04:30.845 ************************************ 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2728017 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2728017 /var/tmp/spdk.sock 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2728017 ']' 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:30.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.845 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:30.845 [2024-11-20 09:36:54.163299] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:30.845 [2024-11-20 09:36:54.163341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728017 ] 00:04:31.104 [2024-11-20 09:36:54.237999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.104 [2024-11-20 09:36:54.280706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2728074 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2728074 /var/tmp/spdk2.sock 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2728074 ']' 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.363 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.364 09:36:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.364 [2024-11-20 09:36:54.539892] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:31.364 [2024-11-20 09:36:54.539940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728074 ] 00:04:31.364 [2024-11-20 09:36:54.624727] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:31.364 [2024-11-20 09:36:54.624750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.622 [2024-11-20 09:36:54.710264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.190 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.190 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:32.190 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2728017 00:04:32.190 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2728017 00:04:32.190 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.448 lslocks: write error 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2728017 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2728017 ']' 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2728017 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728017 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2728017' 00:04:32.448 killing process with pid 2728017 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2728017 00:04:32.448 09:36:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2728017 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2728074 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2728074 ']' 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2728074 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728074 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728074' 00:04:33.015 killing process with pid 2728074 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2728074 00:04:33.015 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2728074 00:04:33.582 00:04:33.582 real 0m2.536s 00:04:33.583 user 0m2.690s 00:04:33.583 sys 0m0.803s 00:04:33.583 09:36:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.583 09:36:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.583 ************************************ 00:04:33.583 END TEST non_locking_app_on_locked_coremask 00:04:33.583 ************************************ 00:04:33.583 09:36:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:33.583 09:36:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.583 09:36:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.583 09:36:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:33.583 ************************************ 00:04:33.583 START TEST locking_app_on_unlocked_coremask 00:04:33.583 ************************************ 00:04:33.583 09:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:33.583 09:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2728434 00:04:33.583 09:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2728434 /var/tmp/spdk.sock 00:04:33.583 09:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:33.583 09:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2728434 ']' 00:04:33.583 09:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.583 09:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.583 09:36:56 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.583 09:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.583 09:36:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.583 [2024-11-20 09:36:56.767888] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:33.583 [2024-11-20 09:36:56.767933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728434 ] 00:04:33.583 [2024-11-20 09:36:56.843677] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:33.583 [2024-11-20 09:36:56.843702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.583 [2024-11-20 09:36:56.884211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2728588 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2728588 /var/tmp/spdk2.sock 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2728588 ']' 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:33.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.842 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.842 [2024-11-20 09:36:57.159326] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:04:33.842 [2024-11-20 09:36:57.159374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728588 ] 00:04:34.191 [2024-11-20 09:36:57.251100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.191 [2024-11-20 09:36:57.332267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.781 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.781 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:34.781 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2728588 00:04:34.781 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.781 09:36:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2728588 00:04:35.046 lslocks: write error 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2728434 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2728434 ']' 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2728434 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728434 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728434' 00:04:35.046 killing process with pid 2728434 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2728434 00:04:35.046 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2728434 00:04:35.994 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2728588 00:04:35.994 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2728588 ']' 00:04:35.994 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2728588 00:04:35.994 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:35.994 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.994 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728588 00:04:35.994 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.996 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.996 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728588' 00:04:35.996 killing process with pid 2728588 00:04:35.996 09:36:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2728588 00:04:35.996 09:36:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2728588 00:04:35.996 00:04:35.996 real 0m2.583s 00:04:35.996 user 0m2.698s 00:04:35.996 sys 0m0.861s 00:04:35.996 09:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.996 09:36:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.996 ************************************ 00:04:35.996 END TEST locking_app_on_unlocked_coremask 00:04:35.996 ************************************ 00:04:36.261 09:36:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:36.261 09:36:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.261 09:36:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.261 09:36:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.261 ************************************ 00:04:36.261 START TEST locking_app_on_locked_coremask 00:04:36.261 ************************************ 00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2728877 00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2728877 /var/tmp/spdk.sock 00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2728877 ']' 00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.261 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.261 [2024-11-20 09:36:59.422348] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:36.261 [2024-11-20 09:36:59.422390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728877 ] 00:04:36.261 [2024-11-20 09:36:59.494898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.261 [2024-11-20 09:36:59.537627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2729091 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2729091 /var/tmp/spdk2.sock 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2729091 /var/tmp/spdk2.sock 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2729091 /var/tmp/spdk2.sock 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2729091 ']' 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:36.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.520 09:36:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.520 [2024-11-20 09:36:59.806954] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:36.520 [2024-11-20 09:36:59.807001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729091 ] 00:04:36.779 [2024-11-20 09:36:59.890109] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2728877 has claimed it. 00:04:36.779 [2024-11-20 09:36:59.890140] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:37.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2729091) - No such process 00:04:37.347 ERROR: process (pid: 2729091) is no longer running 00:04:37.347 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.347 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:37.347 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:37.347 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.347 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.347 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.347 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2728877 00:04:37.347 09:37:00 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2728877 00:04:37.347 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.607 lslocks: write error 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2728877 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2728877 ']' 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2728877 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728877 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728877' 00:04:37.607 killing process with pid 2728877 00:04:37.607 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2728877 00:04:37.608 09:37:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2728877 00:04:38.177 00:04:38.177 real 0m1.839s 00:04:38.177 user 0m1.994s 00:04:38.177 sys 0m0.586s 00:04:38.177 09:37:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.177 09:37:01 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:38.177 ************************************ 00:04:38.177 END TEST locking_app_on_locked_coremask 00:04:38.177 ************************************ 00:04:38.177 09:37:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:38.177 09:37:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.177 09:37:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.177 09:37:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.177 ************************************ 00:04:38.177 START TEST locking_overlapped_coremask 00:04:38.177 ************************************ 00:04:38.177 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:38.177 09:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2729352 00:04:38.178 09:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2729352 /var/tmp/spdk.sock 00:04:38.178 09:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:38.178 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2729352 ']' 00:04:38.178 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.178 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.178 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:38.178 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.178 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.178 [2024-11-20 09:37:01.328266] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:38.178 [2024-11-20 09:37:01.328309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729352 ] 00:04:38.178 [2024-11-20 09:37:01.404868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:38.178 [2024-11-20 09:37:01.450395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.178 [2024-11-20 09:37:01.450500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.178 [2024-11-20 09:37:01.450501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2729367 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2729367 /var/tmp/spdk2.sock 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2729367 /var/tmp/spdk2.sock 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2729367 /var/tmp/spdk2.sock 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2729367 ']' 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.437 09:37:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.437 [2024-11-20 09:37:01.715067] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:04:38.437 [2024-11-20 09:37:01.715115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729367 ] 00:04:38.696 [2024-11-20 09:37:01.807308] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2729352 has claimed it. 00:04:38.696 [2024-11-20 09:37:01.807346] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:39.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2729367) - No such process 00:04:39.265 ERROR: process (pid: 2729367) is no longer running 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2729352 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2729352 ']' 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2729352 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2729352 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2729352' 00:04:39.265 killing process with pid 2729352 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2729352 00:04:39.265 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2729352 00:04:39.524 00:04:39.524 real 0m1.443s 00:04:39.524 user 0m3.982s 00:04:39.524 sys 0m0.390s 00:04:39.524 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.524 09:37:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.524 
************************************ 00:04:39.524 END TEST locking_overlapped_coremask 00:04:39.524 ************************************ 00:04:39.524 09:37:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:39.524 09:37:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.524 09:37:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.524 09:37:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.524 ************************************ 00:04:39.524 START TEST locking_overlapped_coremask_via_rpc 00:04:39.524 ************************************ 00:04:39.524 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:39.524 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2729619 00:04:39.524 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2729619 /var/tmp/spdk.sock 00:04:39.524 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:39.524 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2729619 ']' 00:04:39.524 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.524 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.525 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:39.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.525 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.525 09:37:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.525 [2024-11-20 09:37:02.844730] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:39.525 [2024-11-20 09:37:02.844775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729619 ] 00:04:39.783 [2024-11-20 09:37:02.938964] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:39.783 [2024-11-20 09:37:02.938988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:39.783 [2024-11-20 09:37:02.985767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.783 [2024-11-20 09:37:02.985803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.783 [2024-11-20 09:37:02.985804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2729740 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2729740 /var/tmp/spdk2.sock 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2729740 ']' 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.722 09:37:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.722 [2024-11-20 09:37:03.748354] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:40.722 [2024-11-20 09:37:03.748403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2729740 ] 00:04:40.722 [2024-11-20 09:37:03.842407] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:40.722 [2024-11-20 09:37:03.842438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.722 [2024-11-20 09:37:03.937331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.722 [2024-11-20 09:37:03.937447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.722 [2024-11-20 09:37:03.937449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.291 09:37:04 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.291 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.291 [2024-11-20 09:37:04.619016] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2729619 has claimed it. 00:04:41.551 request: 00:04:41.551 { 00:04:41.551 "method": "framework_enable_cpumask_locks", 00:04:41.551 "req_id": 1 00:04:41.551 } 00:04:41.551 Got JSON-RPC error response 00:04:41.551 response: 00:04:41.551 { 00:04:41.551 "code": -32603, 00:04:41.551 "message": "Failed to claim CPU core: 2" 00:04:41.551 } 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2729619 /var/tmp/spdk.sock 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2729619 ']' 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2729740 /var/tmp/spdk2.sock 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2729740 ']' 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.551 09:37:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.811 09:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.811 09:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.811 09:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:41.811 09:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:41.811 09:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:41.811 09:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:41.811 00:04:41.811 real 0m2.251s 00:04:41.811 user 0m1.024s 00:04:41.811 sys 0m0.152s 00:04:41.811 09:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.811 09:37:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.811 ************************************ 00:04:41.811 END TEST locking_overlapped_coremask_via_rpc 00:04:41.811 ************************************ 00:04:41.811 09:37:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:41.811 09:37:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2729619 ]] 00:04:41.811 09:37:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2729619 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2729619 ']' 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2729619 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2729619 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2729619' 00:04:41.811 killing process with pid 2729619 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2729619 00:04:41.811 09:37:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2729619 00:04:42.380 09:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2729740 ]] 00:04:42.380 09:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2729740 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2729740 ']' 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2729740 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2729740 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2729740' 00:04:42.380 killing process with pid 2729740 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2729740 00:04:42.380 09:37:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2729740 00:04:42.644 09:37:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:42.644 09:37:05 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:42.644 09:37:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2729619 ]] 00:04:42.644 09:37:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2729619 00:04:42.644 09:37:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2729619 ']' 00:04:42.644 09:37:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2729619 00:04:42.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2729619) - No such process 00:04:42.644 09:37:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2729619 is not found' 00:04:42.644 Process with pid 2729619 is not found 00:04:42.644 09:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2729740 ]] 00:04:42.644 09:37:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2729740 00:04:42.644 09:37:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2729740 ']' 00:04:42.644 09:37:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2729740 00:04:42.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2729740) - No such process 00:04:42.644 09:37:05 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2729740 is not found' 00:04:42.644 Process with pid 2729740 is not found 00:04:42.644 09:37:05 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:42.644 00:04:42.644 real 0m14.231s 00:04:42.644 user 0m25.825s 00:04:42.644 sys 0m4.748s 00:04:42.644 09:37:05 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.644 
09:37:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.644 ************************************ 00:04:42.644 END TEST cpu_locks 00:04:42.644 ************************************ 00:04:42.644 00:04:42.644 real 0m39.364s 00:04:42.644 user 1m16.448s 00:04:42.644 sys 0m8.381s 00:04:42.644 09:37:05 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.644 09:37:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.644 ************************************ 00:04:42.644 END TEST event 00:04:42.644 ************************************ 00:04:42.644 09:37:05 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.644 09:37:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.644 09:37:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.644 09:37:05 -- common/autotest_common.sh@10 -- # set +x 00:04:42.644 ************************************ 00:04:42.644 START TEST thread 00:04:42.644 ************************************ 00:04:42.644 09:37:05 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:42.906 * Looking for test storage... 
00:04:42.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:42.907 09:37:05 thread -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:42.907 09:37:05 thread -- common/autotest_common.sh@1703 -- # lcov --version 00:04:42.907 09:37:05 thread -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:42.907 09:37:06 thread -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:42.907 09:37:06 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.907 09:37:06 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.907 09:37:06 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.907 09:37:06 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.907 09:37:06 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.907 09:37:06 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.907 09:37:06 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.907 09:37:06 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.907 09:37:06 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.907 09:37:06 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.907 09:37:06 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.907 09:37:06 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:42.907 09:37:06 thread -- scripts/common.sh@345 -- # : 1 00:04:42.907 09:37:06 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.907 09:37:06 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.907 09:37:06 thread -- scripts/common.sh@365 -- # decimal 1 00:04:42.907 09:37:06 thread -- scripts/common.sh@353 -- # local d=1 00:04:42.907 09:37:06 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.907 09:37:06 thread -- scripts/common.sh@355 -- # echo 1 00:04:42.907 09:37:06 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.907 09:37:06 thread -- scripts/common.sh@366 -- # decimal 2 00:04:42.907 09:37:06 thread -- scripts/common.sh@353 -- # local d=2 00:04:42.907 09:37:06 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.907 09:37:06 thread -- scripts/common.sh@355 -- # echo 2 00:04:42.907 09:37:06 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.907 09:37:06 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.907 09:37:06 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.907 09:37:06 thread -- scripts/common.sh@368 -- # return 0 00:04:42.907 09:37:06 thread -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.907 09:37:06 thread -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:42.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.907 --rc genhtml_branch_coverage=1 00:04:42.907 --rc genhtml_function_coverage=1 00:04:42.907 --rc genhtml_legend=1 00:04:42.907 --rc geninfo_all_blocks=1 00:04:42.907 --rc geninfo_unexecuted_blocks=1 00:04:42.907 00:04:42.907 ' 00:04:42.907 09:37:06 thread -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:42.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.907 --rc genhtml_branch_coverage=1 00:04:42.907 --rc genhtml_function_coverage=1 00:04:42.907 --rc genhtml_legend=1 00:04:42.907 --rc geninfo_all_blocks=1 00:04:42.907 --rc geninfo_unexecuted_blocks=1 00:04:42.907 00:04:42.907 ' 00:04:42.907 09:37:06 thread -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:42.907 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.907 --rc genhtml_branch_coverage=1 00:04:42.907 --rc genhtml_function_coverage=1 00:04:42.907 --rc genhtml_legend=1 00:04:42.907 --rc geninfo_all_blocks=1 00:04:42.907 --rc geninfo_unexecuted_blocks=1 00:04:42.907 00:04:42.907 ' 00:04:42.907 09:37:06 thread -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:42.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.907 --rc genhtml_branch_coverage=1 00:04:42.907 --rc genhtml_function_coverage=1 00:04:42.907 --rc genhtml_legend=1 00:04:42.907 --rc geninfo_all_blocks=1 00:04:42.907 --rc geninfo_unexecuted_blocks=1 00:04:42.907 00:04:42.907 ' 00:04:42.907 09:37:06 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.907 09:37:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:42.907 09:37:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.907 09:37:06 thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.907 ************************************ 00:04:42.907 START TEST thread_poller_perf 00:04:42.907 ************************************ 00:04:42.907 09:37:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:42.907 [2024-11-20 09:37:06.127999] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:04:42.907 [2024-11-20 09:37:06.128060] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730199 ] 00:04:42.907 [2024-11-20 09:37:06.205481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.166 [2024-11-20 09:37:06.248473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.166 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:44.103 [2024-11-20T08:37:07.435Z] ====================================== 00:04:44.103 [2024-11-20T08:37:07.435Z] busy:2309265472 (cyc) 00:04:44.103 [2024-11-20T08:37:07.435Z] total_run_count: 413000 00:04:44.103 [2024-11-20T08:37:07.435Z] tsc_hz: 2300000000 (cyc) 00:04:44.103 [2024-11-20T08:37:07.435Z] ====================================== 00:04:44.103 [2024-11-20T08:37:07.435Z] poller_cost: 5591 (cyc), 2430 (nsec) 00:04:44.103 00:04:44.103 real 0m1.180s 00:04:44.103 user 0m1.103s 00:04:44.103 sys 0m0.072s 00:04:44.103 09:37:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.103 09:37:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.103 ************************************ 00:04:44.103 END TEST thread_poller_perf 00:04:44.103 ************************************ 00:04:44.103 09:37:07 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:44.103 09:37:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:44.103 09:37:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.103 09:37:07 thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.103 ************************************ 00:04:44.103 START TEST thread_poller_perf 00:04:44.103 
************************************ 00:04:44.103 09:37:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:44.103 [2024-11-20 09:37:07.382388] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:44.103 [2024-11-20 09:37:07.382457] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730450 ] 00:04:44.362 [2024-11-20 09:37:07.461080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.362 [2024-11-20 09:37:07.501016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.362 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:45.299 [2024-11-20T08:37:08.631Z] ====================================== 00:04:45.299 [2024-11-20T08:37:08.631Z] busy:2301526648 (cyc) 00:04:45.299 [2024-11-20T08:37:08.631Z] total_run_count: 5463000 00:04:45.299 [2024-11-20T08:37:08.631Z] tsc_hz: 2300000000 (cyc) 00:04:45.299 [2024-11-20T08:37:08.631Z] ====================================== 00:04:45.299 [2024-11-20T08:37:08.631Z] poller_cost: 421 (cyc), 183 (nsec) 00:04:45.299 00:04:45.299 real 0m1.176s 00:04:45.299 user 0m1.106s 00:04:45.299 sys 0m0.066s 00:04:45.299 09:37:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.299 09:37:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.299 ************************************ 00:04:45.299 END TEST thread_poller_perf 00:04:45.299 ************************************ 00:04:45.299 09:37:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:45.299 00:04:45.299 real 0m2.669s 00:04:45.299 user 0m2.369s 00:04:45.299 sys 0m0.315s 00:04:45.299 09:37:08 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.299 09:37:08 thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.299 ************************************ 00:04:45.299 END TEST thread 00:04:45.299 ************************************ 00:04:45.299 09:37:08 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:45.299 09:37:08 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:45.300 09:37:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.300 09:37:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.300 09:37:08 -- common/autotest_common.sh@10 -- # set +x 00:04:45.558 ************************************ 00:04:45.558 START TEST app_cmdline 00:04:45.558 ************************************ 00:04:45.558 09:37:08 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:45.558 * Looking for test storage... 00:04:45.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:45.558 09:37:08 app_cmdline -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:45.558 09:37:08 app_cmdline -- common/autotest_common.sh@1703 -- # lcov --version 00:04:45.558 09:37:08 app_cmdline -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:45.558 09:37:08 app_cmdline -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:45.558 09:37:08 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.559 09:37:08 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:45.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.559 --rc genhtml_branch_coverage=1 
00:04:45.559 --rc genhtml_function_coverage=1 00:04:45.559 --rc genhtml_legend=1 00:04:45.559 --rc geninfo_all_blocks=1 00:04:45.559 --rc geninfo_unexecuted_blocks=1 00:04:45.559 00:04:45.559 ' 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:45.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.559 --rc genhtml_branch_coverage=1 00:04:45.559 --rc genhtml_function_coverage=1 00:04:45.559 --rc genhtml_legend=1 00:04:45.559 --rc geninfo_all_blocks=1 00:04:45.559 --rc geninfo_unexecuted_blocks=1 00:04:45.559 00:04:45.559 ' 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:45.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.559 --rc genhtml_branch_coverage=1 00:04:45.559 --rc genhtml_function_coverage=1 00:04:45.559 --rc genhtml_legend=1 00:04:45.559 --rc geninfo_all_blocks=1 00:04:45.559 --rc geninfo_unexecuted_blocks=1 00:04:45.559 00:04:45.559 ' 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:45.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.559 --rc genhtml_branch_coverage=1 00:04:45.559 --rc genhtml_function_coverage=1 00:04:45.559 --rc genhtml_legend=1 00:04:45.559 --rc geninfo_all_blocks=1 00:04:45.559 --rc geninfo_unexecuted_blocks=1 00:04:45.559 00:04:45.559 ' 00:04:45.559 09:37:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:45.559 09:37:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2730755 00:04:45.559 09:37:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2730755 00:04:45.559 09:37:08 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2730755 ']' 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.559 09:37:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:45.559 [2024-11-20 09:37:08.877471] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:04:45.559 [2024-11-20 09:37:08.877522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730755 ] 00:04:45.818 [2024-11-20 09:37:08.951991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.818 [2024-11-20 09:37:08.991834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.089 09:37:09 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.089 09:37:09 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:46.089 09:37:09 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:46.089 { 00:04:46.090 "version": "SPDK v25.01-pre git sha1 27a4d33d8", 00:04:46.090 "fields": { 00:04:46.090 "major": 25, 00:04:46.090 "minor": 1, 00:04:46.090 "patch": 0, 00:04:46.090 "suffix": "-pre", 00:04:46.090 "commit": "27a4d33d8" 00:04:46.090 } 00:04:46.090 } 00:04:46.090 09:37:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:46.090 09:37:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:46.090 09:37:09 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:46.090 09:37:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:46.090 09:37:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:46.090 09:37:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:46.090 09:37:09 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.090 09:37:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:46.090 09:37:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.358 09:37:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:46.358 09:37:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:46.358 09:37:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:46.358 request: 00:04:46.358 { 00:04:46.358 "method": "env_dpdk_get_mem_stats", 00:04:46.358 "req_id": 1 00:04:46.358 } 00:04:46.358 Got JSON-RPC error response 00:04:46.358 response: 00:04:46.358 { 00:04:46.358 "code": -32601, 00:04:46.358 "message": "Method not found" 00:04:46.358 } 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.358 09:37:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2730755 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2730755 ']' 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2730755 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.358 09:37:09 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2730755 00:04:46.617 09:37:09 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.617 09:37:09 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.617 09:37:09 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2730755' 00:04:46.617 killing process with pid 2730755 00:04:46.617 
09:37:09 app_cmdline -- common/autotest_common.sh@973 -- # kill 2730755 00:04:46.617 09:37:09 app_cmdline -- common/autotest_common.sh@978 -- # wait 2730755 00:04:46.877 00:04:46.877 real 0m1.354s 00:04:46.877 user 0m1.585s 00:04:46.877 sys 0m0.459s 00:04:46.877 09:37:10 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.877 09:37:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:46.877 ************************************ 00:04:46.877 END TEST app_cmdline 00:04:46.877 ************************************ 00:04:46.877 09:37:10 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:46.877 09:37:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.877 09:37:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.877 09:37:10 -- common/autotest_common.sh@10 -- # set +x 00:04:46.877 ************************************ 00:04:46.877 START TEST version 00:04:46.877 ************************************ 00:04:46.877 09:37:10 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:46.877 * Looking for test storage... 
00:04:46.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:46.877 09:37:10 version -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:46.877 09:37:10 version -- common/autotest_common.sh@1703 -- # lcov --version 00:04:46.877 09:37:10 version -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:47.137 09:37:10 version -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:47.137 09:37:10 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.137 09:37:10 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.137 09:37:10 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.137 09:37:10 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.137 09:37:10 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.137 09:37:10 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.137 09:37:10 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.137 09:37:10 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.137 09:37:10 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.137 09:37:10 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.137 09:37:10 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.137 09:37:10 version -- scripts/common.sh@344 -- # case "$op" in 00:04:47.138 09:37:10 version -- scripts/common.sh@345 -- # : 1 00:04:47.138 09:37:10 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.138 09:37:10 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.138 09:37:10 version -- scripts/common.sh@365 -- # decimal 1 00:04:47.138 09:37:10 version -- scripts/common.sh@353 -- # local d=1 00:04:47.138 09:37:10 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.138 09:37:10 version -- scripts/common.sh@355 -- # echo 1 00:04:47.138 09:37:10 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.138 09:37:10 version -- scripts/common.sh@366 -- # decimal 2 00:04:47.138 09:37:10 version -- scripts/common.sh@353 -- # local d=2 00:04:47.138 09:37:10 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.138 09:37:10 version -- scripts/common.sh@355 -- # echo 2 00:04:47.138 09:37:10 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.138 09:37:10 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.138 09:37:10 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.138 09:37:10 version -- scripts/common.sh@368 -- # return 0 00:04:47.138 09:37:10 version -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.138 09:37:10 version -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:47.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.138 --rc genhtml_branch_coverage=1 00:04:47.138 --rc genhtml_function_coverage=1 00:04:47.138 --rc genhtml_legend=1 00:04:47.138 --rc geninfo_all_blocks=1 00:04:47.138 --rc geninfo_unexecuted_blocks=1 00:04:47.138 00:04:47.138 ' 00:04:47.138 09:37:10 version -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:47.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.138 --rc genhtml_branch_coverage=1 00:04:47.138 --rc genhtml_function_coverage=1 00:04:47.138 --rc genhtml_legend=1 00:04:47.138 --rc geninfo_all_blocks=1 00:04:47.138 --rc geninfo_unexecuted_blocks=1 00:04:47.138 00:04:47.138 ' 00:04:47.138 09:37:10 version -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:47.138 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.138 --rc genhtml_branch_coverage=1 00:04:47.138 --rc genhtml_function_coverage=1 00:04:47.138 --rc genhtml_legend=1 00:04:47.138 --rc geninfo_all_blocks=1 00:04:47.138 --rc geninfo_unexecuted_blocks=1 00:04:47.138 00:04:47.138 ' 00:04:47.138 09:37:10 version -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:47.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.138 --rc genhtml_branch_coverage=1 00:04:47.138 --rc genhtml_function_coverage=1 00:04:47.138 --rc genhtml_legend=1 00:04:47.138 --rc geninfo_all_blocks=1 00:04:47.138 --rc geninfo_unexecuted_blocks=1 00:04:47.138 00:04:47.138 ' 00:04:47.138 09:37:10 version -- app/version.sh@17 -- # get_header_version major 00:04:47.138 09:37:10 version -- app/version.sh@14 -- # cut -f2 00:04:47.138 09:37:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:47.138 09:37:10 version -- app/version.sh@14 -- # tr -d '"' 00:04:47.138 09:37:10 version -- app/version.sh@17 -- # major=25 00:04:47.138 09:37:10 version -- app/version.sh@18 -- # get_header_version minor 00:04:47.138 09:37:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:47.138 09:37:10 version -- app/version.sh@14 -- # cut -f2 00:04:47.138 09:37:10 version -- app/version.sh@14 -- # tr -d '"' 00:04:47.138 09:37:10 version -- app/version.sh@18 -- # minor=1 00:04:47.138 09:37:10 version -- app/version.sh@19 -- # get_header_version patch 00:04:47.138 09:37:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:47.138 09:37:10 version -- app/version.sh@14 -- # cut -f2 00:04:47.138 09:37:10 version -- app/version.sh@14 -- # tr -d '"' 00:04:47.138 
09:37:10 version -- app/version.sh@19 -- # patch=0 00:04:47.138 09:37:10 version -- app/version.sh@20 -- # get_header_version suffix 00:04:47.138 09:37:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:47.138 09:37:10 version -- app/version.sh@14 -- # cut -f2 00:04:47.138 09:37:10 version -- app/version.sh@14 -- # tr -d '"' 00:04:47.138 09:37:10 version -- app/version.sh@20 -- # suffix=-pre 00:04:47.138 09:37:10 version -- app/version.sh@22 -- # version=25.1 00:04:47.138 09:37:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:47.138 09:37:10 version -- app/version.sh@28 -- # version=25.1rc0 00:04:47.138 09:37:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:47.138 09:37:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:47.138 09:37:10 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:47.138 09:37:10 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:47.138 00:04:47.138 real 0m0.244s 00:04:47.138 user 0m0.153s 00:04:47.138 sys 0m0.132s 00:04:47.138 09:37:10 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.138 09:37:10 version -- common/autotest_common.sh@10 -- # set +x 00:04:47.138 ************************************ 00:04:47.138 END TEST version 00:04:47.138 ************************************ 00:04:47.138 09:37:10 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:47.138 09:37:10 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:47.138 09:37:10 -- spdk/autotest.sh@194 -- # uname -s 00:04:47.138 09:37:10 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:47.138 09:37:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:47.138 09:37:10 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:47.138 09:37:10 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:47.138 09:37:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:47.138 09:37:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:47.138 09:37:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.138 09:37:10 -- common/autotest_common.sh@10 -- # set +x 00:04:47.138 09:37:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:47.138 09:37:10 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:47.138 09:37:10 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:47.138 09:37:10 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:47.138 09:37:10 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:47.138 09:37:10 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:47.138 09:37:10 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:47.138 09:37:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.138 09:37:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.138 09:37:10 -- common/autotest_common.sh@10 -- # set +x 00:04:47.138 ************************************ 00:04:47.138 START TEST nvmf_tcp 00:04:47.138 ************************************ 00:04:47.138 09:37:10 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:47.398 * Looking for test storage... 
00:04:47.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1703 -- # lcov --version 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.398 09:37:10 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:47.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.398 --rc genhtml_branch_coverage=1 00:04:47.398 --rc genhtml_function_coverage=1 00:04:47.398 --rc genhtml_legend=1 00:04:47.398 --rc geninfo_all_blocks=1 00:04:47.398 --rc geninfo_unexecuted_blocks=1 00:04:47.398 00:04:47.398 ' 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:47.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.398 --rc genhtml_branch_coverage=1 00:04:47.398 --rc genhtml_function_coverage=1 00:04:47.398 --rc genhtml_legend=1 00:04:47.398 --rc geninfo_all_blocks=1 00:04:47.398 --rc geninfo_unexecuted_blocks=1 00:04:47.398 00:04:47.398 ' 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1717 -- # export 
'LCOV=lcov 00:04:47.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.398 --rc genhtml_branch_coverage=1 00:04:47.398 --rc genhtml_function_coverage=1 00:04:47.398 --rc genhtml_legend=1 00:04:47.398 --rc geninfo_all_blocks=1 00:04:47.398 --rc geninfo_unexecuted_blocks=1 00:04:47.398 00:04:47.398 ' 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:47.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.398 --rc genhtml_branch_coverage=1 00:04:47.398 --rc genhtml_function_coverage=1 00:04:47.398 --rc genhtml_legend=1 00:04:47.398 --rc geninfo_all_blocks=1 00:04:47.398 --rc geninfo_unexecuted_blocks=1 00:04:47.398 00:04:47.398 ' 00:04:47.398 09:37:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:47.398 09:37:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:47.398 09:37:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.398 09:37:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.398 ************************************ 00:04:47.398 START TEST nvmf_target_core 00:04:47.398 ************************************ 00:04:47.398 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:47.398 * Looking for test storage... 
00:04:47.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1703 -- # lcov --version 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:47.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.658 --rc genhtml_branch_coverage=1 00:04:47.658 --rc genhtml_function_coverage=1 00:04:47.658 --rc genhtml_legend=1 00:04:47.658 --rc geninfo_all_blocks=1 00:04:47.658 --rc geninfo_unexecuted_blocks=1 00:04:47.658 00:04:47.658 ' 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:47.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.658 --rc genhtml_branch_coverage=1 
00:04:47.658 --rc genhtml_function_coverage=1 00:04:47.658 --rc genhtml_legend=1 00:04:47.658 --rc geninfo_all_blocks=1 00:04:47.658 --rc geninfo_unexecuted_blocks=1 00:04:47.658 00:04:47.658 ' 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:47.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.658 --rc genhtml_branch_coverage=1 00:04:47.658 --rc genhtml_function_coverage=1 00:04:47.658 --rc genhtml_legend=1 00:04:47.658 --rc geninfo_all_blocks=1 00:04:47.658 --rc geninfo_unexecuted_blocks=1 00:04:47.658 00:04:47.658 ' 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:47.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.658 --rc genhtml_branch_coverage=1 00:04:47.658 --rc genhtml_function_coverage=1 00:04:47.658 --rc genhtml_legend=1 00:04:47.658 --rc geninfo_all_blocks=1 00:04:47.658 --rc geninfo_unexecuted_blocks=1 00:04:47.658 00:04:47.658 ' 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.658 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:47.659 ************************************ 00:04:47.659 START TEST nvmf_abort 00:04:47.659 ************************************ 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:47.659 * Looking for test storage... 
00:04:47.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # lcov --version 00:04:47.659 09:37:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.919 
09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:47.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.919 --rc genhtml_branch_coverage=1 00:04:47.919 --rc genhtml_function_coverage=1 00:04:47.919 --rc genhtml_legend=1 00:04:47.919 --rc geninfo_all_blocks=1 00:04:47.919 --rc 
geninfo_unexecuted_blocks=1 00:04:47.919 00:04:47.919 ' 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:47.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.919 --rc genhtml_branch_coverage=1 00:04:47.919 --rc genhtml_function_coverage=1 00:04:47.919 --rc genhtml_legend=1 00:04:47.919 --rc geninfo_all_blocks=1 00:04:47.919 --rc geninfo_unexecuted_blocks=1 00:04:47.919 00:04:47.919 ' 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:47.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.919 --rc genhtml_branch_coverage=1 00:04:47.919 --rc genhtml_function_coverage=1 00:04:47.919 --rc genhtml_legend=1 00:04:47.919 --rc geninfo_all_blocks=1 00:04:47.919 --rc geninfo_unexecuted_blocks=1 00:04:47.919 00:04:47.919 ' 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:47.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.919 --rc genhtml_branch_coverage=1 00:04:47.919 --rc genhtml_function_coverage=1 00:04:47.919 --rc genhtml_legend=1 00:04:47.919 --rc geninfo_all_blocks=1 00:04:47.919 --rc geninfo_unexecuted_blocks=1 00:04:47.919 00:04:47.919 ' 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
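The `cmp_versions 1.15 '<' 2` trace above (from `scripts/common.sh`) walks through a component-wise version comparison. A minimal standalone sketch of that idea, not the SPDK code itself, splits each version string on `.`, `-` and `:` and compares numeric components left to right:

```shell
#!/usr/bin/env bash
# Sketch of a "less-than" version comparison like the one traced above.
# Assumes purely numeric components (as in the logged 'lt 1.15 2' call);
# missing components are treated as 0.
lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1  # equal, so not less-than
}

lt 1.15 2 && echo "1.15 < 2"       # same result as the traced call
lt 2.1 2.1 || echo "2.1 is not < 2.1"
```

This is why the log above ends that block with `return 0`: the first component comparison (`1 < 2`) already decides the result, and the lcov options are then enabled.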
00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.919 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.920 09:37:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:47.920 09:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:54.497 09:37:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:54.497 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:54.497 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:54.497 09:37:16 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:54.497 Found net devices under 0000:86:00.0: cvl_0_0 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:04:54.497 Found net devices under 0000:86:00.1: cvl_0_1 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:54.497 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:54.498 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:54.498 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:54.498 09:37:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:54.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:54.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:04:54.498 00:04:54.498 --- 10.0.0.2 ping statistics --- 00:04:54.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:54.498 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:54.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:54.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:04:54.498 00:04:54.498 --- 10.0.0.1 ping statistics --- 00:04:54.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:54.498 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2734432 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2734432 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2734432 ']' 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 [2024-11-20 09:37:17.194488] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:04:54.498 [2024-11-20 09:37:17.194531] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:54.498 [2024-11-20 09:37:17.273774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.498 [2024-11-20 09:37:17.317368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:54.498 [2024-11-20 09:37:17.317406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:54.498 [2024-11-20 09:37:17.317413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:54.498 [2024-11-20 09:37:17.317419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:54.498 [2024-11-20 09:37:17.317424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
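The target above is launched with `-m 0xE`, a hex core mask in which each set bit selects one CPU core; with three cores available, 0xE (binary 1110) selects cores 1, 2 and 3, matching the three "Reactor started" notices in the log. A small hypothetical helper (not part of the SPDK tree) shows the decoding:

```shell
#!/usr/bin/env bash
# Hypothetical helper: decode a DPDK/SPDK hex core mask (as passed to
# nvmf_tgt via -m) into the list of selected CPU core IDs.
cores_from_mask() {
    local mask=$(( $1 ))
    local -a cores=()
    local bit
    for (( bit = 0; mask >> bit; bit++ )); do
        if (( mask >> bit & 1 )); then
            cores+=("$bit")
        fi
    done
    echo "${cores[*]}"
}

cores_from_mask 0xE    # cores 1 2 3: bit 0 (core 0) is left free
cores_from_mask 0x1    # core 0 only
```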
00:04:54.498 [2024-11-20 09:37:17.318806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.498 [2024-11-20 09:37:17.318912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.498 [2024-11-20 09:37:17.318913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 [2024-11-20 09:37:17.459756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 Malloc0 00:04:54.498 09:37:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 Delay0 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 [2024-11-20 09:37:17.533135] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.498 09:37:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:54.498 [2024-11-20 09:37:17.669713] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:57.036 Initializing NVMe Controllers 00:04:57.036 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:57.036 controller IO queue size 128 less than required 00:04:57.036 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:57.036 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:57.036 Initialization complete. Launching workers. 
00:04:57.036 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36861 00:04:57.036 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36922, failed to submit 62 00:04:57.036 success 36865, unsuccessful 57, failed 0 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:57.036 rmmod nvme_tcp 00:04:57.036 rmmod nvme_fabrics 00:04:57.036 rmmod nvme_keyring 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:57.036 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:57.037 09:37:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2734432 ']' 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2734432 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2734432 ']' 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2734432 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2734432 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2734432' 00:04:57.037 killing process with pid 2734432 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2734432 00:04:57.037 09:37:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2734432 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# iptables-restore 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:57.037 09:37:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:58.941 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:58.941 00:04:58.941 real 0m11.235s 00:04:58.941 user 0m11.617s 00:04:58.941 sys 0m5.442s 00:04:58.941 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.941 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.941 ************************************ 00:04:58.941 END TEST nvmf_abort 00:04:58.941 ************************************ 00:04:58.941 09:37:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:58.941 09:37:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:58.941 09:37:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.941 09:37:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:58.941 ************************************ 00:04:58.941 START TEST nvmf_ns_hotplug_stress 00:04:58.941 ************************************ 00:04:58.941 09:37:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:59.202 * Looking for test storage... 00:04:59.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # lcov --version 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.202 
09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.202 09:37:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:04:59.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.202 --rc genhtml_branch_coverage=1 00:04:59.202 --rc genhtml_function_coverage=1 00:04:59.202 --rc genhtml_legend=1 00:04:59.202 --rc geninfo_all_blocks=1 00:04:59.202 --rc geninfo_unexecuted_blocks=1 00:04:59.202 00:04:59.202 ' 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:04:59.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.202 --rc genhtml_branch_coverage=1 00:04:59.202 --rc genhtml_function_coverage=1 00:04:59.202 --rc genhtml_legend=1 00:04:59.202 --rc geninfo_all_blocks=1 00:04:59.202 --rc geninfo_unexecuted_blocks=1 00:04:59.202 00:04:59.202 ' 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:04:59.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.202 --rc genhtml_branch_coverage=1 00:04:59.202 --rc genhtml_function_coverage=1 00:04:59.202 --rc genhtml_legend=1 00:04:59.202 --rc geninfo_all_blocks=1 00:04:59.202 --rc geninfo_unexecuted_blocks=1 00:04:59.202 00:04:59.202 ' 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:04:59.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.202 --rc genhtml_branch_coverage=1 00:04:59.202 --rc genhtml_function_coverage=1 00:04:59.202 --rc genhtml_legend=1 00:04:59.202 --rc geninfo_all_blocks=1 00:04:59.202 --rc geninfo_unexecuted_blocks=1 00:04:59.202 
00:04:59.202 ' 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.202 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:59.203 09:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:05.780 09:37:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:05.780 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:05.780 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:05.780 09:37:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:05.780 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:05.781 Found net devices under 0000:86:00.0: cvl_0_0 00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:05.781 09:37:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:05:05.781 Found net devices under 0000:86:00.1: cvl_0_1
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:05.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:05.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms
00:05:05.781 
00:05:05.781 --- 10.0.0.2 ping statistics ---
00:05:05.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:05.781 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:05.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:05.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms
00:05:05.781 
00:05:05.781 --- 10.0.0.1 ping statistics ---
00:05:05.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:05.781 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2738457
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2738457
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2738457 ']'
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:05.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:05.781 [2024-11-20 09:37:28.513690] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization...
00:05:05.781 [2024-11-20 09:37:28.513737] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:05.781 [2024-11-20 09:37:28.595503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:05.781 [2024-11-20 09:37:28.635714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:05.781 [2024-11-20 09:37:28.635752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:05.781 [2024-11-20 09:37:28.635759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:05.781 [2024-11-20 09:37:28.635765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:05.781 [2024-11-20 09:37:28.635769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:05.781 [2024-11-20 09:37:28.637254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:05.781 [2024-11-20 09:37:28.637345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:05.781 [2024-11-20 09:37:28.637345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:05:05.781 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:05.782 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:05.782 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:05.782 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:05:05.782 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:05:05.782 [2024-11-20 09:37:28.958286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:05.782 09:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:05:06.041 09:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:05:06.041 [2024-11-20 09:37:29.371786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:05:06.301 09:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:05:06.301 09:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:05:06.560 Malloc0
00:05:06.560 09:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:05:06.818 Delay0
00:05:06.818 09:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:07.077 09:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:05:07.077 NULL1
00:05:07.077 09:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:05:07.336 09:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:05:07.337 09:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2738938
00:05:07.337 09:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:07.337 09:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:08.717 Read completed with error (sct=0, sc=11)
00:05:08.717 09:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:08.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:08.717 09:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:05:08.717 09:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:05:08.976 true
00:05:08.976 09:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:08.976 09:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:09.914 09:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:09.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:09.914 09:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:05:09.914 09:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:05:10.173 true
00:05:10.173 09:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:10.173 09:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:10.432 09:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:10.691 09:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:05:10.691 09:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:05:10.691 true
00:05:10.691 09:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:10.691 09:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:12.069 09:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:12.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:12.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:12.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:12.069 09:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:05:12.069 09:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:05:12.328 true
00:05:12.328 09:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:12.328 09:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:12.587 09:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:12.587 09:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:05:12.587 09:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:05:12.846 true
00:05:12.846 09:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:12.846 09:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:14.224 09:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:14.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:14.224 09:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:05:14.224 09:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:05:14.482 true
00:05:14.482 09:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:14.482 09:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:15.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:15.419 09:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:15.419 09:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:05:15.419 09:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:05:15.678 true
00:05:15.678 09:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:15.678 09:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:15.937 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:16.196 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:05:16.196 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:05:16.196 true
00:05:16.196 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:16.196 09:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:17.643 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:17.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.643 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:05:17.643 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:05:17.643 true
00:05:17.643 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:17.643 09:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:18.581 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:18.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.840 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:05:18.840 09:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:05:18.840 true
00:05:19.099 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:19.099 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:19.099 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:19.356 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:05:19.356 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:05:19.615 true
00:05:19.615 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:19.615 09:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:20.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.993 09:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:20.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.993 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:05:20.993 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:05:20.993 true
00:05:21.253 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:21.253 09:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:21.823 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:21.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:22.081 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:05:22.081 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:05:22.341 true
00:05:22.341 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:22.341 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:22.600 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:22.960 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:05:22.960 09:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:05:22.960 true
00:05:22.961 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:22.961 09:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:24.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.342 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:24.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.342 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.342 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:05:24.342 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:05:24.602 true
00:05:24.602 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:24.602 09:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:25.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:25.540 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:25.540 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:05:25.540 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:05:25.799 true
00:05:25.799 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:25.799 09:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:25.799 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:26.057 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:05:26.057 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:05:26.315 true
00:05:26.315 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:26.315 09:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:27.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.693 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:27.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.693 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:05:27.693 09:37:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:05:27.952 true
00:05:27.952 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:27.952 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:28.889 09:37:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:28.889 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:05:28.889 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:05:29.147 true
00:05:29.147 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:29.147 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:29.406 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:29.406 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:05:29.406 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:05:29.666 true
00:05:29.666 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:29.666 09:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:31.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:31.046 09:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:31.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:31.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:31.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:31.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:31.046 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:31.046 09:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:05:31.046 09:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:05:31.305 true
00:05:31.305 09:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:31.305 09:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:32.244 09:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:32.244 09:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:05:32.244 09:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:05:32.504 true
00:05:32.504 09:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:32.504 09:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:32.763 09:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:32.763 09:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:32.763 09:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:33.023 true
00:05:33.023 09:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:33.023 09:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:33.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.960 09:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:34.219 09:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:34.219 09:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:34.478 true
00:05:34.478 09:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:34.478 09:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.738 09:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:34.997 09:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:34.997 09:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:34.997 true
00:05:34.997 09:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:34.997 09:37:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.376 09:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.376 09:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:36.376 09:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:36.376 true 00:05:36.376 09:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938 00:05:36.376 09:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.636 09:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.895 09:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:36.895 09:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:37.154 true 00:05:37.154 09:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938 00:05:37.154 09:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.091 09:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:38.091 Initializing NVMe Controllers
00:05:38.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:38.091 Controller IO queue size 128, less than required.
00:05:38.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:38.091 Controller IO queue size 128, less than required.
00:05:38.091 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:38.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:38.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:38.091 Initialization complete. Launching workers.
00:05:38.091 ========================================================
00:05:38.091                                                                                                      Latency(us)
00:05:38.091 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:05:38.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1714.50       0.84   50925.17    2542.66 1084427.52
00:05:38.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16996.23       8.30    7530.69    1616.18  385719.83
00:05:38.091 ========================================================
00:05:38.091 Total                                                                    :   18710.73       9.14   11507.01    1616.18 1084427.52
00:05:38.091
00:05:38.350 09:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:38.350 09:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:38.350 true
00:05:38.609 09:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2738938
00:05:38.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2738938) - No such process
00:05:38.609 09:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2738938
00:05:38.609 09:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.609 09:38:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:38.868 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:38.868 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:38.868 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:38.868 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:38.868 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:39.127 null0 00:05:39.127 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.127 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.127 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:39.386 null1 00:05:39.386 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.386 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.386 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:39.386 null2 00:05:39.386 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.386 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.386 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:39.645 null3 00:05:39.645 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.645 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.645 09:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:39.904 null4 00:05:39.904 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:39.904 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:39.904 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:40.163 null5 00:05:40.163 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.163 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.163 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:40.163 null6 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:40.423 null7 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:40.423 09:38:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.423 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2744545 2744546 2744548 2744551 2744552 2744554 2744555 2744557 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.424 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.683 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.683 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:40.683 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:40.683 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:40.683 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:40.683 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:40.683 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:40.683 09:38:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:40.942 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:40.943 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.203 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.203 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.203 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.203 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.203 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:05:41.203 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.203 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.203 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.462 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.722 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.722 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.722 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.722 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.722 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:41.722 09:38:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.722 09:38:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:41.722 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:41.980 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:41.981 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.981 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:41.981 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:41.981 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:41.981 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:41.981 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:41.981 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.239 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.240 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.498 09:38:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.498 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.498 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:42.498 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:42.498 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.498 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.498 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.498 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.756 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.757 09:38:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:42.757 09:38:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:42.757 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:42.757 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:42.757 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:42.757 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:42.757 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.016 09:38:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.016 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.275 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.275 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.275 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.275 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.275 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.275 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.275 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.275 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.534 
09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.534 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:43.793 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:43.793 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:43.793 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:43.793 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:43.793 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.793 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:43.793 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:43.793 09:38:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:43.793 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:43.793 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:43.793 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.052 09:38:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.052 09:38:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.052 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.311 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.569 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.569 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.570 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.570 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.570 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.570 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.570 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.570 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.829 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.829 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.829 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.829 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.830 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.830 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.830 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.830 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.830 09:38:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:44.830 rmmod nvme_tcp 00:05:44.830 rmmod nvme_fabrics 00:05:44.830 rmmod nvme_keyring 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2738457 ']' 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2738457 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2738457 ']' 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2738457 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2738457 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2738457' 00:05:44.830 killing process with pid 2738457 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2738457 00:05:44.830 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2738457 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.089 09:38:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:47.639 00:05:47.639 real 0m48.192s 00:05:47.639 user 3m15.077s 00:05:47.639 sys 0m15.897s 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:47.639 ************************************ 00:05:47.639 END TEST nvmf_ns_hotplug_stress 00:05:47.639 ************************************ 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:47.639 ************************************ 00:05:47.639 START TEST nvmf_delete_subsystem 00:05:47.639 ************************************ 00:05:47.639 
09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:47.639 * Looking for test storage... 00:05:47.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # lcov --version 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.639 09:38:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:47.639 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.640 09:38:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:05:47.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.640 --rc genhtml_branch_coverage=1 00:05:47.640 --rc genhtml_function_coverage=1 00:05:47.640 --rc genhtml_legend=1 00:05:47.640 --rc geninfo_all_blocks=1 00:05:47.640 --rc geninfo_unexecuted_blocks=1 00:05:47.640 00:05:47.640 ' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:05:47.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.640 --rc genhtml_branch_coverage=1 00:05:47.640 --rc genhtml_function_coverage=1 00:05:47.640 --rc genhtml_legend=1 00:05:47.640 --rc geninfo_all_blocks=1 00:05:47.640 --rc geninfo_unexecuted_blocks=1 00:05:47.640 00:05:47.640 ' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:05:47.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.640 --rc genhtml_branch_coverage=1 00:05:47.640 --rc genhtml_function_coverage=1 00:05:47.640 --rc genhtml_legend=1 00:05:47.640 --rc geninfo_all_blocks=1 00:05:47.640 --rc geninfo_unexecuted_blocks=1 00:05:47.640 00:05:47.640 ' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:05:47.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.640 --rc genhtml_branch_coverage=1 00:05:47.640 --rc genhtml_function_coverage=1 00:05:47.640 --rc genhtml_legend=1 00:05:47.640 --rc geninfo_all_blocks=1 00:05:47.640 --rc geninfo_unexecuted_blocks=1 00:05:47.640 00:05:47.640 ' 
00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.640 09:38:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:47.640 09:38:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:54.213 09:38:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:54.213 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:54.214 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:54.214 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:54.214 Found net devices under 0000:86:00.0: cvl_0_0 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:05:54.214 Found net devices under 0000:86:00.1: cvl_0_1 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:54.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:54.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:05:54.214 00:05:54.214 --- 10.0.0.2 ping statistics --- 00:05:54.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.214 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:54.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:54.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:05:54.214 00:05:54.214 --- 10.0.0.1 ping statistics --- 00:05:54.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:54.214 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:54.214 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:54.215 09:38:16 
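The trace above shows `nvmftestinit` moving one port of the two-port E810 NIC into a private network namespace and then pinging in both directions to confirm the loopback topology. The same setup can be sketched as a plain script (interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addressing are taken directly from the log; this requires root and real NICs, so it is illustrative only):

```shell
# Sketch of the namespace-based loopback topology built by nvmftestinit.
# cvl_0_0 becomes the target-side NIC inside the namespace; cvl_0_1 stays
# in the default namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP traffic to the default port (4420) on the initiator side.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity check: each side can reach the other through the physical loopback.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Because the target runs inside the namespace, initiator traffic to 10.0.0.2 genuinely traverses the NIC pair rather than the kernel loopback device.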
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2748946 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2748946 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2748946 ']' 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.215 [2024-11-20 09:38:16.745595] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:05:54.215 [2024-11-20 09:38:16.745647] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:54.215 [2024-11-20 09:38:16.826054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.215 [2024-11-20 09:38:16.866032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:54.215 [2024-11-20 09:38:16.866070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:54.215 [2024-11-20 09:38:16.866077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.215 [2024-11-20 09:38:16.866083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.215 [2024-11-20 09:38:16.866088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:54.215 [2024-11-20 09:38:16.867357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.215 [2024-11-20 09:38:16.867358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.215 09:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.215 [2024-11-20 09:38:17.015685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.215 [2024-11-20 09:38:17.035885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.215 NULL1 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.215 Delay0 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.215 09:38:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2748970 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:54.215 09:38:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:54.215 [2024-11-20 09:38:17.147681] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
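The RPC calls interleaved through the trace (lines tagged `delete_subsystem.sh@15` through `@26`) amount to the following sequence, reconstructed here as direct `rpc.py` invocations. The commands and arguments are taken from the log; the `$RPC` path is the standard SPDK helper and is an assumption about the working directory (the target itself runs inside the `cvl_0_0_ns_spdk` namespace, as shown at `common.sh@508`):

```shell
# Reconstruction of the RPC sequence delete_subsystem.sh issues against
# the nvmf_tgt app listening on /var/tmp/spdk.sock.
RPC="./scripts/rpc.py"    # assumed path to the SPDK RPC helper

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# A null bdev (1000 MiB, 512-byte blocks) wrapped in a delay bdev, so that
# plenty of I/O is still in flight when the subsystem is torn down mid-run.
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# With spdk_nvme_perf running against the namespace, deleting the
# subsystem exercises the in-flight-command abort path:
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The delay bdev is the crux of the test design: by forcing every I/O to take ~1 s, it guarantees a full queue of outstanding commands at the moment `nvmf_delete_subsystem` runs.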
00:05:56.119 09:38:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:56.119 09:38:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.119 09:38:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Write completed with error (sct=0, sc=8) 00:05:56.119 starting I/O failed: -6 00:05:56.119 Write completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Write completed with error (sct=0, sc=8) 00:05:56.119 starting I/O failed: -6 00:05:56.119 Write completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Write completed with error (sct=0, sc=8) 00:05:56.119 starting I/O failed: -6 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 starting I/O failed: -6 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Write completed with error (sct=0, sc=8) 00:05:56.119 starting I/O failed: -6 00:05:56.119 Write completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 starting I/O failed: -6 00:05:56.119 Write completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.119 Read completed with error 
(sct=0, sc=8) 00:05:56.119 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed 
with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read 
completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 [2024-11-20 09:38:19.303437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1181960 is same with the state(6) to be set 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 
Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 
00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Write completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O 
failed: -6 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 Read completed with error (sct=0, sc=8) 00:05:56.120 starting I/O failed: -6 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 Write completed with error (sct=0, sc=8) 00:05:56.121 starting I/O failed: -6 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 starting I/O failed: -6 00:05:56.121 Write completed with error (sct=0, sc=8) 00:05:56.121 Write completed with error (sct=0, sc=8) 00:05:56.121 starting I/O failed: -6 00:05:56.121 Write completed with error (sct=0, sc=8) 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 starting I/O failed: -6 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 Write completed with error (sct=0, sc=8) 00:05:56.121 starting I/O failed: -6 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 Write completed with error (sct=0, sc=8) 00:05:56.121 starting I/O failed: -6 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 starting I/O failed: -6 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 Write completed with error (sct=0, sc=8) 00:05:56.121 starting I/O failed: -6 00:05:56.121 Read completed with error (sct=0, sc=8) 00:05:56.121 Write completed with error (sct=0, sc=8) 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O 
failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:56.121 starting I/O failed: -6 00:05:57.059 [2024-11-20 09:38:20.282608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11829a0 is same with the state(6) to be set 00:05:57.059 Read completed with error (sct=0, sc=8) 00:05:57.059 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 
00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 [2024-11-20 09:38:20.307962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11812c0 is same with the state(6) to be set 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read 
completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 [2024-11-20 09:38:20.308157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1181b40 is same with the state(6) to be set 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with 
error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 [2024-11-20 09:38:20.310080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ccc00d020 is same with the state(6) to be set 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Write completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.060 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error 
(sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Read completed with error (sct=0, sc=8) 00:05:57.061 Write completed with error (sct=0, sc=8) 00:05:57.061 [2024-11-20 09:38:20.310776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5ccc00d680 is same with the state(6) to be set 00:05:57.061 Initializing NVMe Controllers 00:05:57.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:57.061 Controller IO queue size 128, less than required. 
00:05:57.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:57.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:57.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:57.061 Initialization complete. Launching workers. 00:05:57.061 ======================================================== 00:05:57.061 Latency(us) 00:05:57.061 Device Information : IOPS MiB/s Average min max 00:05:57.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.19 0.09 890994.91 375.01 1006662.80 00:05:57.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.78 0.08 983592.88 268.21 2001341.16 00:05:57.061 ======================================================== 00:05:57.061 Total : 360.97 0.18 934547.88 268.21 2001341.16 00:05:57.061 00:05:57.061 [2024-11-20 09:38:20.311373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11829a0 (9): Bad file descriptor 00:05:57.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:57.061 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.061 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:57.061 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2748970 00:05:57.061 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2748970 00:05:57.630 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2748970) - No such process 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2748970 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2748970 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2748970 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.630 09:38:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.630 [2024-11-20 09:38:20.842118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2749660 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:57.630 09:38:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2749660 00:05:57.630 09:38:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:57.630 [2024-11-20 09:38:20.929800] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:58.198 09:38:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.198 09:38:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2749660 00:05:58.198 09:38:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:58.766 09:38:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:58.766 09:38:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2749660 00:05:58.766 09:38:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.336 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.336 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2749660 00:05:59.336 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:59.595 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:59.595 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2749660 00:05:59.595 09:38:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # 
sleep 0.5 00:06:00.163 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:00.163 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2749660 00:06:00.163 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:00.733 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:00.733 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2749660 00:06:00.733 09:38:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:00.992 Initializing NVMe Controllers 00:06:00.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:00.992 Controller IO queue size 128, less than required. 00:06:00.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:00.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:00.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:00.992 Initialization complete. Launching workers. 
00:06:00.992 ======================================================== 00:06:00.992 Latency(us) 00:06:00.992 Device Information : IOPS MiB/s Average min max 00:06:00.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001994.91 1000121.18 1005748.78 00:06:00.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004715.49 1000515.02 1041375.33 00:06:00.992 ======================================================== 00:06:00.992 Total : 256.00 0.12 1003355.20 1000121.18 1041375.33 00:06:00.992 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2749660 00:06:01.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2749660) - No such process 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2749660 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:06:01.251 rmmod nvme_tcp 00:06:01.251 rmmod nvme_fabrics 00:06:01.251 rmmod nvme_keyring 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2748946 ']' 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2748946 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2748946 ']' 00:06:01.251 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2748946 00:06:01.252 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:01.252 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.252 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2748946 00:06:01.252 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.252 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.252 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2748946' 00:06:01.252 killing process with pid 2748946 00:06:01.252 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2748946 00:06:01.252 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
2748946 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:01.511 09:38:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.417 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:03.417 00:06:03.417 real 0m16.269s 00:06:03.417 user 0m29.370s 00:06:03.417 sys 0m5.558s 00:06:03.417 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.417 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.417 ************************************ 00:06:03.417 END TEST 
nvmf_delete_subsystem 00:06:03.417 ************************************ 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:03.677 ************************************ 00:06:03.677 START TEST nvmf_host_management 00:06:03.677 ************************************ 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:03.677 * Looking for test storage... 00:06:03.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # lcov --version 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.677 09:38:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:06:03.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.677 --rc genhtml_branch_coverage=1 00:06:03.677 --rc genhtml_function_coverage=1 00:06:03.677 --rc genhtml_legend=1 00:06:03.677 --rc 
geninfo_all_blocks=1 00:06:03.677 --rc geninfo_unexecuted_blocks=1 00:06:03.677 00:06:03.677 ' 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:06:03.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.677 --rc genhtml_branch_coverage=1 00:06:03.677 --rc genhtml_function_coverage=1 00:06:03.677 --rc genhtml_legend=1 00:06:03.677 --rc geninfo_all_blocks=1 00:06:03.677 --rc geninfo_unexecuted_blocks=1 00:06:03.677 00:06:03.677 ' 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:06:03.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.677 --rc genhtml_branch_coverage=1 00:06:03.677 --rc genhtml_function_coverage=1 00:06:03.677 --rc genhtml_legend=1 00:06:03.677 --rc geninfo_all_blocks=1 00:06:03.677 --rc geninfo_unexecuted_blocks=1 00:06:03.677 00:06:03.677 ' 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:06:03.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.677 --rc genhtml_branch_coverage=1 00:06:03.677 --rc genhtml_function_coverage=1 00:06:03.677 --rc genhtml_legend=1 00:06:03.677 --rc geninfo_all_blocks=1 00:06:03.677 --rc geninfo_unexecuted_blocks=1 00:06:03.677 00:06:03.677 ' 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.677 09:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.677 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.677 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.677 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.677 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.677 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.938 
09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:03.938 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:03.939 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:03.939 09:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:10.512 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:10.512 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:10.512 09:38:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:10.512 Found net devices under 0000:86:00.0: cvl_0_0 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:10.512 Found net devices under 0000:86:00.1: cvl_0_1 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:10.512 09:38:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:10.512 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:10.512 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:10.512 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:10.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:10.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:06:10.512 00:06:10.512 --- 10.0.0.2 ping statistics --- 00:06:10.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.513 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:10.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:10.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:06:10.513 00:06:10.513 --- 10.0.0.1 ping statistics --- 00:06:10.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:10.513 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.513 09:38:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2753882 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2753882 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2753882 ']' 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.513 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.513 [2024-11-20 09:38:33.122453] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:06:10.513 [2024-11-20 09:38:33.122497] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.513 [2024-11-20 09:38:33.202847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.513 [2024-11-20 09:38:33.244012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:10.513 [2024-11-20 09:38:33.244052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:10.513 [2024-11-20 09:38:33.244059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:10.513 [2024-11-20 09:38:33.244065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:10.513 [2024-11-20 09:38:33.244070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:10.513 [2024-11-20 09:38:33.245610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.513 [2024-11-20 09:38:33.245646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.513 [2024-11-20 09:38:33.245730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.513 [2024-11-20 09:38:33.245732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:10.771 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.772 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:10.772 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:10.772 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.772 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.772 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:10.772 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:10.772 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.772 09:38:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.772 [2024-11-20 09:38:34.004754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:10.772 09:38:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:10.772 Malloc0 00:06:10.772 [2024-11-20 09:38:34.081492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.772 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2753960 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2753960 /var/tmp/bdevperf.sock 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2753960 ']' 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:11.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:11.031 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:11.031 { 00:06:11.031 "params": { 00:06:11.031 "name": "Nvme$subsystem", 00:06:11.031 "trtype": "$TEST_TRANSPORT", 00:06:11.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:11.031 "adrfam": "ipv4", 00:06:11.031 "trsvcid": "$NVMF_PORT", 00:06:11.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:11.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:11.031 "hdgst": ${hdgst:-false}, 
00:06:11.031 "ddgst": ${ddgst:-false} 00:06:11.031 }, 00:06:11.031 "method": "bdev_nvme_attach_controller" 00:06:11.031 } 00:06:11.031 EOF 00:06:11.032 )") 00:06:11.032 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:11.032 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:11.032 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:11.032 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:11.032 "params": { 00:06:11.032 "name": "Nvme0", 00:06:11.032 "trtype": "tcp", 00:06:11.032 "traddr": "10.0.0.2", 00:06:11.032 "adrfam": "ipv4", 00:06:11.032 "trsvcid": "4420", 00:06:11.032 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:11.032 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:11.032 "hdgst": false, 00:06:11.032 "ddgst": false 00:06:11.032 }, 00:06:11.032 "method": "bdev_nvme_attach_controller" 00:06:11.032 }' 00:06:11.032 [2024-11-20 09:38:34.179418] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:06:11.032 [2024-11-20 09:38:34.179464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753960 ] 00:06:11.032 [2024-11-20 09:38:34.257553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.032 [2024-11-20 09:38:34.299077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.291 Running I/O for 10 seconds... 
00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:06:11.291 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:11.550 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:11.550 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:11.550 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:11.550 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:11.550 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.550 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.550 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.812 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=670 00:06:11.812 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 670 -ge 100 ']' 00:06:11.812 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:11.812 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:11.812 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:11.812 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:11.812 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.812 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.812 [2024-11-20 09:38:34.896206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270200 is same with the state(6) to be set 00:06:11.812 [2024-11-20 09:38:34.896282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270200 is same with the state(6) to be set 00:06:11.812 [2024-11-20 09:38:34.896291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270200 is same with the state(6) to be set 00:06:11.812 [2024-11-20 09:38:34.896298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270200 is same with the state(6) to be set 00:06:11.812 [2024-11-20 09:38:34.896304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270200 is same with the state(6) to be set 00:06:11.812 [2024-11-20 09:38:34.896311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270200 is same with the state(6) to be set 00:06:11.812 [2024-11-20 09:38:34.896317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1270200 is same with the state(6) to be set 00:06:11.812 09:38:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.812 [2024-11-20 09:38:34.901279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.812 [2024-11-20 09:38:34.901310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.812 [2024-11-20 09:38:34.901321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.812 [2024-11-20 09:38:34.901328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.812 [2024-11-20 09:38:34.901335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.812 [2024-11-20 09:38:34.901343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.812 [2024-11-20 09:38:34.901350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:11.812 [2024-11-20 09:38:34.901363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.812 [2024-11-20 09:38:34.901370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b500 is same with the state(6) to be set 00:06:11.812 [2024-11-20 09:38:34.901407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.812 [2024-11-20 09:38:34.901416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:06:11.812 [2024-11-20 09:38:34.901430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.812 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:11.812 [2024-11-20 09:38:34.901438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.812 [2024-11-20 09:38:34.901451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.812 [2024-11-20 09:38:34.901458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.812 [2024-11-20 09:38:34.901466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.812 [2024-11-20 09:38:34.901473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 
09:38:34.901510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 
[2024-11-20 09:38:34.901848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:11.813 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.901940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.901992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.902008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.902017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.902023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.902031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.902038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.902046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.902053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.902061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:06:11.813 [2024-11-20 09:38:34.902067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.813 [2024-11-20 09:38:34.902075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 
09:38:34.902147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:11.814 [2024-11-20 09:38:34.902380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 
09:38:34.902399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.902422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:11.814 [2024-11-20 09:38:34.902429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:11.814 [2024-11-20 09:38:34.903390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:11.814 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:11.814 00:06:11.814 Latency(us) 00:06:11.814 [2024-11-20T08:38:35.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:11.814 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:11.814 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:11.814 Verification LBA range: start 0x0 length 0x400 00:06:11.814 Nvme0n1 : 0.40 1904.41 119.03 158.70 0.00 30178.21 1674.02 27582.11 00:06:11.814 [2024-11-20T08:38:35.146Z] =================================================================================================================== 00:06:11.814 [2024-11-20T08:38:35.146Z] Total : 1904.41 119.03 158.70 0.00 30178.21 1674.02 27582.11 00:06:11.814 [2024-11-20 09:38:34.905774] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.814 [2024-11-20 09:38:34.905794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1d2b500 (9): Bad file descriptor 00:06:11.814 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.814 09:38:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:11.814 [2024-11-20 09:38:34.956043] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2753960 00:06:12.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2753960) - No such process 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:12.862 { 00:06:12.862 "params": { 00:06:12.862 "name": "Nvme$subsystem", 00:06:12.862 "trtype": "$TEST_TRANSPORT", 00:06:12.862 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:06:12.862 "adrfam": "ipv4", 00:06:12.862 "trsvcid": "$NVMF_PORT", 00:06:12.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:12.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:12.862 "hdgst": ${hdgst:-false}, 00:06:12.862 "ddgst": ${ddgst:-false} 00:06:12.862 }, 00:06:12.862 "method": "bdev_nvme_attach_controller" 00:06:12.862 } 00:06:12.862 EOF 00:06:12.862 )") 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:12.862 09:38:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:12.862 "params": { 00:06:12.862 "name": "Nvme0", 00:06:12.862 "trtype": "tcp", 00:06:12.862 "traddr": "10.0.0.2", 00:06:12.862 "adrfam": "ipv4", 00:06:12.862 "trsvcid": "4420", 00:06:12.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:12.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:12.862 "hdgst": false, 00:06:12.862 "ddgst": false 00:06:12.862 }, 00:06:12.862 "method": "bdev_nvme_attach_controller" 00:06:12.862 }' 00:06:12.862 [2024-11-20 09:38:35.966596] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:06:12.862 [2024-11-20 09:38:35.966644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2754413 ] 00:06:12.862 [2024-11-20 09:38:36.041584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.862 [2024-11-20 09:38:36.081338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.122 Running I/O for 1 seconds... 
00:06:14.494 1984.00 IOPS, 124.00 MiB/s 00:06:14.494 Latency(us) 00:06:14.494 [2024-11-20T08:38:37.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:14.494 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:14.494 Verification LBA range: start 0x0 length 0x400 00:06:14.494 Nvme0n1 : 1.01 2034.22 127.14 0.00 0.00 30957.45 5328.36 27354.16 00:06:14.494 [2024-11-20T08:38:37.826Z] =================================================================================================================== 00:06:14.494 [2024-11-20T08:38:37.826Z] Total : 2034.22 127.14 0.00 0.00 30957.45 5328.36 27354.16 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:14.494 09:38:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:14.494 rmmod nvme_tcp 00:06:14.494 rmmod nvme_fabrics 00:06:14.494 rmmod nvme_keyring 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2753882 ']' 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2753882 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2753882 ']' 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2753882 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2753882 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2753882' 00:06:14.494 killing process with pid 2753882 00:06:14.494 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2753882 00:06:14.494 09:38:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2753882 00:06:14.752 [2024-11-20 09:38:37.841311] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.752 09:38:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.657 09:38:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:16.657 09:38:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:16.657 00:06:16.657 real 0m13.136s 00:06:16.657 user 0m22.628s 
00:06:16.657 sys 0m5.652s 00:06:16.657 09:38:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.657 09:38:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.657 ************************************ 00:06:16.657 END TEST nvmf_host_management 00:06:16.657 ************************************ 00:06:16.657 09:38:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:16.657 09:38:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.657 09:38:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.657 09:38:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:16.915 ************************************ 00:06:16.915 START TEST nvmf_lvol 00:06:16.915 ************************************ 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:16.916 * Looking for test storage... 
00:06:16.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # lcov --version 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.916 09:38:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:06:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.916 --rc genhtml_branch_coverage=1 00:06:16.916 --rc genhtml_function_coverage=1 00:06:16.916 --rc genhtml_legend=1 00:06:16.916 --rc geninfo_all_blocks=1 00:06:16.916 --rc geninfo_unexecuted_blocks=1 
00:06:16.916 00:06:16.916 ' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:06:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.916 --rc genhtml_branch_coverage=1 00:06:16.916 --rc genhtml_function_coverage=1 00:06:16.916 --rc genhtml_legend=1 00:06:16.916 --rc geninfo_all_blocks=1 00:06:16.916 --rc geninfo_unexecuted_blocks=1 00:06:16.916 00:06:16.916 ' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:06:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.916 --rc genhtml_branch_coverage=1 00:06:16.916 --rc genhtml_function_coverage=1 00:06:16.916 --rc genhtml_legend=1 00:06:16.916 --rc geninfo_all_blocks=1 00:06:16.916 --rc geninfo_unexecuted_blocks=1 00:06:16.916 00:06:16.916 ' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:06:16.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.916 --rc genhtml_branch_coverage=1 00:06:16.916 --rc genhtml_function_coverage=1 00:06:16.916 --rc genhtml_legend=1 00:06:16.916 --rc geninfo_all_blocks=1 00:06:16.916 --rc geninfo_unexecuted_blocks=1 00:06:16.916 00:06:16.916 ' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.916 09:38:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:16.916 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:16.917 09:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:23.487 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:23.487 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:23.487 
09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:23.487 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:23.488 Found net devices under 0000:86:00.0: cvl_0_0 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:23.488 09:38:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:23.488 Found net devices under 0000:86:00.1: cvl_0_1 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:23.488 09:38:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:23.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:23.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:06:23.488 00:06:23.488 --- 10.0.0.2 ping statistics --- 00:06:23.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.488 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:23.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:23.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:06:23.488 00:06:23.488 --- 10.0.0.1 ping statistics --- 00:06:23.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:23.488 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2758191 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2758191 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2758191 ']' 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.488 09:38:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:23.488 [2024-11-20 09:38:46.322217] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:06:23.488 [2024-11-20 09:38:46.322268] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.488 [2024-11-20 09:38:46.402241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.488 [2024-11-20 09:38:46.445103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.488 [2024-11-20 09:38:46.445140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.488 [2024-11-20 09:38:46.445147] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.488 [2024-11-20 09:38:46.445153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.488 [2024-11-20 09:38:46.445158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:23.488 [2024-11-20 09:38:46.446468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.488 [2024-11-20 09:38:46.446579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.488 [2024-11-20 09:38:46.446580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.055 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.055 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:24.055 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:24.055 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.055 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:24.055 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.056 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:24.056 [2024-11-20 09:38:47.373880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.314 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:24.314 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:24.314 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:24.572 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:24.572 09:38:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:24.830 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:25.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b6b4a2fc-deb1-4394-bfb3-a1533ec43ecf 00:06:25.089 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b6b4a2fc-deb1-4394-bfb3-a1533ec43ecf lvol 20 00:06:25.348 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3d19f167-fa4b-48e7-bd7a-6db57ebe8212 00:06:25.348 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:25.607 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3d19f167-fa4b-48e7-bd7a-6db57ebe8212 00:06:25.607 09:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:25.866 [2024-11-20 09:38:49.036703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.866 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.124 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:26.125 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2758694 00:06:26.125 09:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:27.061 09:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3d19f167-fa4b-48e7-bd7a-6db57ebe8212 MY_SNAPSHOT 00:06:27.321 09:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8003f29b-e54c-45f1-8d60-975bba4b6d69 00:06:27.321 09:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3d19f167-fa4b-48e7-bd7a-6db57ebe8212 30 00:06:27.580 09:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8003f29b-e54c-45f1-8d60-975bba4b6d69 MY_CLONE 00:06:27.839 09:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=838c10f6-bd42-46c9-b766-d10840744b04 00:06:27.839 09:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 838c10f6-bd42-46c9-b766-d10840744b04 00:06:28.406 09:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2758694 00:06:36.529 Initializing NVMe Controllers 00:06:36.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:36.529 Controller IO queue size 128, less than required. 00:06:36.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:36.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:36.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:36.529 Initialization complete. Launching workers. 00:06:36.529 ======================================================== 00:06:36.529 Latency(us) 00:06:36.529 Device Information : IOPS MiB/s Average min max 00:06:36.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12151.80 47.47 10533.21 1936.78 60521.53 00:06:36.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12071.90 47.16 10604.91 3572.86 57613.44 00:06:36.529 ======================================================== 00:06:36.529 Total : 24223.70 94.62 10568.94 1936.78 60521.53 00:06:36.529 00:06:36.529 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:36.529 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3d19f167-fa4b-48e7-bd7a-6db57ebe8212 00:06:36.787 09:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6b4a2fc-deb1-4394-bfb3-a1533ec43ecf 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:37.045 rmmod nvme_tcp 00:06:37.045 rmmod nvme_fabrics 00:06:37.045 rmmod nvme_keyring 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2758191 ']' 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2758191 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2758191 ']' 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2758191 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.045 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2758191 00:06:37.046 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.046 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.046 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2758191' 00:06:37.046 killing process with pid 2758191 00:06:37.046 09:39:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2758191 00:06:37.046 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2758191 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.305 09:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:39.842 00:06:39.842 real 0m22.551s 00:06:39.842 user 1m4.858s 00:06:39.842 sys 0m7.689s 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:39.842 ************************************ 00:06:39.842 END TEST 
nvmf_lvol 00:06:39.842 ************************************ 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:39.842 ************************************ 00:06:39.842 START TEST nvmf_lvs_grow 00:06:39.842 ************************************ 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:39.842 * Looking for test storage... 00:06:39.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # lcov --version 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.842 09:39:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:06:39.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.842 --rc genhtml_branch_coverage=1 00:06:39.842 --rc genhtml_function_coverage=1 00:06:39.842 --rc genhtml_legend=1 00:06:39.842 --rc geninfo_all_blocks=1 00:06:39.842 --rc geninfo_unexecuted_blocks=1 00:06:39.842 00:06:39.842 ' 
00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:06:39.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.842 --rc genhtml_branch_coverage=1 00:06:39.842 --rc genhtml_function_coverage=1 00:06:39.842 --rc genhtml_legend=1 00:06:39.842 --rc geninfo_all_blocks=1 00:06:39.842 --rc geninfo_unexecuted_blocks=1 00:06:39.842 00:06:39.842 ' 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:06:39.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.842 --rc genhtml_branch_coverage=1 00:06:39.842 --rc genhtml_function_coverage=1 00:06:39.842 --rc genhtml_legend=1 00:06:39.842 --rc geninfo_all_blocks=1 00:06:39.842 --rc geninfo_unexecuted_blocks=1 00:06:39.842 00:06:39.842 ' 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:06:39.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.842 --rc genhtml_branch_coverage=1 00:06:39.842 --rc genhtml_function_coverage=1 00:06:39.842 --rc genhtml_legend=1 00:06:39.842 --rc geninfo_all_blocks=1 00:06:39.842 --rc geninfo_unexecuted_blocks=1 00:06:39.842 00:06:39.842 ' 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:39.842 09:39:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:39.842 
09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.842 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:39.843 09:39:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:39.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:39.843 
09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:39.843 09:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.412 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:46.413 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:46.413 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.413 
09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:46.413 Found net devices under 0000:86:00.0: cvl_0_0 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:46.413 Found net devices under 0000:86:00.1: cvl_0_1 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.413 09:39:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:46.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:06:46.413 00:06:46.413 --- 10.0.0.2 ping statistics --- 00:06:46.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.413 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:06:46.413 00:06:46.413 --- 10.0.0.1 ping statistics --- 00:06:46.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.413 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:46.413 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2764410 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2764410 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2764410 ']' 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.414 09:39:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.414 [2024-11-20 09:39:08.954117] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:06:46.414 [2024-11-20 09:39:08.954166] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.414 [2024-11-20 09:39:09.034315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.414 [2024-11-20 09:39:09.076579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.414 [2024-11-20 09:39:09.076614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.414 [2024-11-20 09:39:09.076625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.414 [2024-11-20 09:39:09.076632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.414 [2024-11-20 09:39:09.076637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:46.414 [2024-11-20 09:39:09.077209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.414 [2024-11-20 09:39:09.406410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.414 ************************************ 00:06:46.414 START TEST lvs_grow_clean 00:06:46.414 ************************************ 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:46.414 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:46.674 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:06:46.674 09:39:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:06:46.674 09:39:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:46.932 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:46.932 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:46.932 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e lvol 150 00:06:47.191 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e6cb1c8c-06e7-4cce-a17e-40761d3c8474 00:06:47.191 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:47.191 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:47.191 [2024-11-20 09:39:10.484983] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:47.191 [2024-11-20 09:39:10.485047] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:47.191 true 00:06:47.191 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:06:47.191 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:47.449 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:47.449 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:47.707 09:39:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e6cb1c8c-06e7-4cce-a17e-40761d3c8474 00:06:47.966 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:47.966 [2024-11-20 09:39:11.231232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.966 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.225 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:48.225 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2765095 00:06:48.225 09:39:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:48.225 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2765095 /var/tmp/bdevperf.sock 00:06:48.225 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2765095 ']' 00:06:48.225 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:48.225 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.225 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:48.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:48.225 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.225 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:48.225 [2024-11-20 09:39:11.469274] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:06:48.225 [2024-11-20 09:39:11.469320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765095 ] 00:06:48.225 [2024-11-20 09:39:11.543357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.483 [2024-11-20 09:39:11.585178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.483 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.483 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:48.483 09:39:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:48.741 Nvme0n1 00:06:48.741 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:48.999 [ 00:06:48.999 { 00:06:48.999 "name": "Nvme0n1", 00:06:48.999 "aliases": [ 00:06:48.999 "e6cb1c8c-06e7-4cce-a17e-40761d3c8474" 00:06:48.999 ], 00:06:48.999 "product_name": "NVMe disk", 00:06:48.999 "block_size": 4096, 00:06:48.999 "num_blocks": 38912, 00:06:48.999 "uuid": "e6cb1c8c-06e7-4cce-a17e-40761d3c8474", 00:06:48.999 "numa_id": 1, 00:06:48.999 "assigned_rate_limits": { 00:06:48.999 "rw_ios_per_sec": 0, 00:06:48.999 "rw_mbytes_per_sec": 0, 00:06:48.999 "r_mbytes_per_sec": 0, 00:06:48.999 "w_mbytes_per_sec": 0 00:06:48.999 }, 00:06:48.999 "claimed": false, 00:06:48.999 "zoned": false, 00:06:48.999 "supported_io_types": { 00:06:48.999 "read": true, 
00:06:48.999 "write": true, 00:06:48.999 "unmap": true, 00:06:48.999 "flush": true, 00:06:48.999 "reset": true, 00:06:48.999 "nvme_admin": true, 00:06:48.999 "nvme_io": true, 00:06:48.999 "nvme_io_md": false, 00:06:48.999 "write_zeroes": true, 00:06:48.999 "zcopy": false, 00:06:48.999 "get_zone_info": false, 00:06:48.999 "zone_management": false, 00:06:48.999 "zone_append": false, 00:06:48.999 "compare": true, 00:06:48.999 "compare_and_write": true, 00:06:48.999 "abort": true, 00:06:48.999 "seek_hole": false, 00:06:48.999 "seek_data": false, 00:06:48.999 "copy": true, 00:06:48.999 "nvme_iov_md": false 00:06:48.999 }, 00:06:48.999 "memory_domains": [ 00:06:48.999 { 00:06:48.999 "dma_device_id": "system", 00:06:48.999 "dma_device_type": 1 00:06:48.999 } 00:06:48.999 ], 00:06:48.999 "driver_specific": { 00:06:48.999 "nvme": [ 00:06:48.999 { 00:06:48.999 "trid": { 00:06:48.999 "trtype": "TCP", 00:06:48.999 "adrfam": "IPv4", 00:06:48.999 "traddr": "10.0.0.2", 00:06:48.999 "trsvcid": "4420", 00:06:48.999 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:48.999 }, 00:06:48.999 "ctrlr_data": { 00:06:48.999 "cntlid": 1, 00:06:48.999 "vendor_id": "0x8086", 00:06:48.999 "model_number": "SPDK bdev Controller", 00:06:48.999 "serial_number": "SPDK0", 00:06:48.999 "firmware_revision": "25.01", 00:06:48.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:48.999 "oacs": { 00:06:48.999 "security": 0, 00:06:48.999 "format": 0, 00:06:48.999 "firmware": 0, 00:06:48.999 "ns_manage": 0 00:06:48.999 }, 00:06:48.999 "multi_ctrlr": true, 00:06:48.999 "ana_reporting": false 00:06:48.999 }, 00:06:48.999 "vs": { 00:06:48.999 "nvme_version": "1.3" 00:06:48.999 }, 00:06:48.999 "ns_data": { 00:06:48.999 "id": 1, 00:06:48.999 "can_share": true 00:06:48.999 } 00:06:48.999 } 00:06:48.999 ], 00:06:48.999 "mp_policy": "active_passive" 00:06:48.999 } 00:06:48.999 } 00:06:48.999 ] 00:06:48.999 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2765325 00:06:48.999 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:48.999 09:39:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:48.999 Running I/O for 10 seconds... 00:06:50.372 Latency(us) 00:06:50.372 [2024-11-20T08:39:13.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.372 Nvme0n1 : 1.00 22772.00 88.95 0.00 0.00 0.00 0.00 0.00 00:06:50.372 [2024-11-20T08:39:13.704Z] =================================================================================================================== 00:06:50.372 [2024-11-20T08:39:13.704Z] Total : 22772.00 88.95 0.00 0.00 0.00 0.00 0.00 00:06:50.372 00:06:50.938 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:06:51.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.199 Nvme0n1 : 2.00 22905.50 89.47 0.00 0.00 0.00 0.00 0.00 00:06:51.199 [2024-11-20T08:39:14.531Z] =================================================================================================================== 00:06:51.199 [2024-11-20T08:39:14.531Z] Total : 22905.50 89.47 0.00 0.00 0.00 0.00 0.00 00:06:51.199 00:06:51.199 true 00:06:51.199 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:06:51.199 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:51.456 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:51.456 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:51.456 09:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2765325 00:06:52.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.021 Nvme0n1 : 3.00 22920.00 89.53 0.00 0.00 0.00 0.00 0.00 00:06:52.021 [2024-11-20T08:39:15.353Z] =================================================================================================================== 00:06:52.021 [2024-11-20T08:39:15.353Z] Total : 22920.00 89.53 0.00 0.00 0.00 0.00 0.00 00:06:52.021 00:06:53.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.394 Nvme0n1 : 4.00 22985.25 89.79 0.00 0.00 0.00 0.00 0.00 00:06:53.394 [2024-11-20T08:39:16.726Z] =================================================================================================================== 00:06:53.394 [2024-11-20T08:39:16.726Z] Total : 22985.25 89.79 0.00 0.00 0.00 0.00 0.00 00:06:53.394 00:06:54.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.333 Nvme0n1 : 5.00 23050.80 90.04 0.00 0.00 0.00 0.00 0.00 00:06:54.333 [2024-11-20T08:39:17.665Z] =================================================================================================================== 00:06:54.333 [2024-11-20T08:39:17.666Z] Total : 23050.80 90.04 0.00 0.00 0.00 0.00 0.00 00:06:54.334 00:06:55.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.271 Nvme0n1 : 6.00 23084.67 90.17 0.00 0.00 0.00 0.00 0.00 00:06:55.271 [2024-11-20T08:39:18.603Z] =================================================================================================================== 00:06:55.271 
[2024-11-20T08:39:18.603Z] Total : 23084.67 90.17 0.00 0.00 0.00 0.00 0.00 00:06:55.271 00:06:56.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.205 Nvme0n1 : 7.00 23108.00 90.27 0.00 0.00 0.00 0.00 0.00 00:06:56.205 [2024-11-20T08:39:19.537Z] =================================================================================================================== 00:06:56.205 [2024-11-20T08:39:19.537Z] Total : 23108.00 90.27 0.00 0.00 0.00 0.00 0.00 00:06:56.205 00:06:57.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.137 Nvme0n1 : 8.00 23104.62 90.25 0.00 0.00 0.00 0.00 0.00 00:06:57.137 [2024-11-20T08:39:20.469Z] =================================================================================================================== 00:06:57.137 [2024-11-20T08:39:20.469Z] Total : 23104.62 90.25 0.00 0.00 0.00 0.00 0.00 00:06:57.137 00:06:58.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.072 Nvme0n1 : 9.00 23100.33 90.24 0.00 0.00 0.00 0.00 0.00 00:06:58.072 [2024-11-20T08:39:21.404Z] =================================================================================================================== 00:06:58.072 [2024-11-20T08:39:21.404Z] Total : 23100.33 90.24 0.00 0.00 0.00 0.00 0.00 00:06:58.072 00:06:59.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.447 Nvme0n1 : 10.00 23118.30 90.31 0.00 0.00 0.00 0.00 0.00 00:06:59.447 [2024-11-20T08:39:22.779Z] =================================================================================================================== 00:06:59.447 [2024-11-20T08:39:22.779Z] Total : 23118.30 90.31 0.00 0.00 0.00 0.00 0.00 00:06:59.447 00:06:59.447 00:06:59.447 Latency(us) 00:06:59.447 [2024-11-20T08:39:22.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:59.447 Nvme0n1 : 10.00 23122.91 90.32 0.00 0.00 5532.67 2849.39 11112.63 00:06:59.447 [2024-11-20T08:39:22.779Z] =================================================================================================================== 00:06:59.447 [2024-11-20T08:39:22.779Z] Total : 23122.91 90.32 0.00 0.00 5532.67 2849.39 11112.63 00:06:59.447 { 00:06:59.447 "results": [ 00:06:59.447 { 00:06:59.447 "job": "Nvme0n1", 00:06:59.447 "core_mask": "0x2", 00:06:59.447 "workload": "randwrite", 00:06:59.447 "status": "finished", 00:06:59.447 "queue_depth": 128, 00:06:59.447 "io_size": 4096, 00:06:59.447 "runtime": 10.003541, 00:06:59.447 "iops": 23122.912176798196, 00:06:59.447 "mibps": 90.32387569061795, 00:06:59.447 "io_failed": 0, 00:06:59.447 "io_timeout": 0, 00:06:59.447 "avg_latency_us": 5532.673366769714, 00:06:59.447 "min_latency_us": 2849.391304347826, 00:06:59.447 "max_latency_us": 11112.626086956521 00:06:59.447 } 00:06:59.447 ], 00:06:59.447 "core_count": 1 00:06:59.447 } 00:06:59.447 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2765095 00:06:59.447 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2765095 ']' 00:06:59.447 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2765095 00:06:59.447 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:59.447 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.447 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2765095 00:06:59.447 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:59.447 09:39:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:59.447 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2765095' 00:06:59.447 killing process with pid 2765095 00:06:59.447 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2765095 00:06:59.448 Received shutdown signal, test time was about 10.000000 seconds 00:06:59.448 00:06:59.448 Latency(us) 00:06:59.448 [2024-11-20T08:39:22.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.448 [2024-11-20T08:39:22.780Z] =================================================================================================================== 00:06:59.448 [2024-11-20T08:39:22.780Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:59.448 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2765095 00:06:59.448 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:59.448 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:59.705 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:06:59.705 09:39:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:59.964 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:59.964 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:59.964 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:00.222 [2024-11-20 09:39:23.369705] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:00.222 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:07:00.222 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:00.222 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:07:00.222 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.222 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.222 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.222 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.222 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.222 
09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.222 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.223 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:00.223 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:07:00.481 request: 00:07:00.481 { 00:07:00.481 "uuid": "9b36b51f-2f40-4214-bf6f-ff6534f7783e", 00:07:00.481 "method": "bdev_lvol_get_lvstores", 00:07:00.481 "req_id": 1 00:07:00.481 } 00:07:00.481 Got JSON-RPC error response 00:07:00.481 response: 00:07:00.481 { 00:07:00.481 "code": -19, 00:07:00.481 "message": "No such device" 00:07:00.481 } 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:00.481 aio_bdev 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev e6cb1c8c-06e7-4cce-a17e-40761d3c8474 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e6cb1c8c-06e7-4cce-a17e-40761d3c8474 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:00.481 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:00.740 09:39:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e6cb1c8c-06e7-4cce-a17e-40761d3c8474 -t 2000 00:07:00.998 [ 00:07:00.998 { 00:07:00.998 "name": "e6cb1c8c-06e7-4cce-a17e-40761d3c8474", 00:07:00.998 "aliases": [ 00:07:00.998 "lvs/lvol" 00:07:00.998 ], 00:07:00.998 "product_name": "Logical Volume", 00:07:00.998 "block_size": 4096, 00:07:00.998 "num_blocks": 38912, 00:07:00.998 "uuid": "e6cb1c8c-06e7-4cce-a17e-40761d3c8474", 00:07:00.998 "assigned_rate_limits": { 00:07:00.998 "rw_ios_per_sec": 0, 00:07:00.998 "rw_mbytes_per_sec": 0, 00:07:00.998 "r_mbytes_per_sec": 0, 00:07:00.998 "w_mbytes_per_sec": 0 00:07:00.998 }, 00:07:00.998 "claimed": false, 00:07:00.998 "zoned": false, 00:07:00.998 "supported_io_types": { 00:07:00.998 "read": true, 00:07:00.998 "write": true, 00:07:00.998 "unmap": true, 00:07:00.998 "flush": false, 00:07:00.998 "reset": true, 00:07:00.998 
"nvme_admin": false, 00:07:00.998 "nvme_io": false, 00:07:00.999 "nvme_io_md": false, 00:07:00.999 "write_zeroes": true, 00:07:00.999 "zcopy": false, 00:07:00.999 "get_zone_info": false, 00:07:00.999 "zone_management": false, 00:07:00.999 "zone_append": false, 00:07:00.999 "compare": false, 00:07:00.999 "compare_and_write": false, 00:07:00.999 "abort": false, 00:07:00.999 "seek_hole": true, 00:07:00.999 "seek_data": true, 00:07:00.999 "copy": false, 00:07:00.999 "nvme_iov_md": false 00:07:00.999 }, 00:07:00.999 "driver_specific": { 00:07:00.999 "lvol": { 00:07:00.999 "lvol_store_uuid": "9b36b51f-2f40-4214-bf6f-ff6534f7783e", 00:07:00.999 "base_bdev": "aio_bdev", 00:07:00.999 "thin_provision": false, 00:07:00.999 "num_allocated_clusters": 38, 00:07:00.999 "snapshot": false, 00:07:00.999 "clone": false, 00:07:00.999 "esnap_clone": false 00:07:00.999 } 00:07:00.999 } 00:07:00.999 } 00:07:00.999 ] 00:07:00.999 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:00.999 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:07:00.999 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:01.257 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:01.257 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:07:01.257 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:01.257 09:39:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:01.257 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e6cb1c8c-06e7-4cce-a17e-40761d3c8474 00:07:01.516 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b36b51f-2f40-4214-bf6f-ff6534f7783e 00:07:01.774 09:39:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.033 00:07:02.033 real 0m15.698s 00:07:02.033 user 0m15.246s 00:07:02.033 sys 0m1.537s 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:02.033 ************************************ 00:07:02.033 END TEST lvs_grow_clean 00:07:02.033 ************************************ 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.033 ************************************ 
00:07:02.033 START TEST lvs_grow_dirty 00:07:02.033 ************************************ 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.033 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:02.292 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:02.292 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:02.550 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:02.550 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:02.550 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:02.550 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:02.550 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:02.550 09:39:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9e4bb3ee-2352-49c4-adf4-451a73825710 lvol 150 00:07:02.808 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=55837d84-e672-46c8-b04b-2cbc3a96597f 00:07:02.808 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.808 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:03.066 [2024-11-20 09:39:26.211888] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:03.066 [2024-11-20 09:39:26.211942] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:03.066 true 00:07:03.066 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:03.066 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:03.324 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:03.324 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:03.324 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55837d84-e672-46c8-b04b-2cbc3a96597f 00:07:03.583 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:03.841 [2024-11-20 09:39:26.962141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.842 09:39:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2767711 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2767711 /var/tmp/bdevperf.sock 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2767711 ']' 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:03.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.842 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:04.100 [2024-11-20 09:39:27.194432] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:07:04.100 [2024-11-20 09:39:27.194478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767711 ] 00:07:04.100 [2024-11-20 09:39:27.266430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.100 [2024-11-20 09:39:27.308996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.100 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.100 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:04.100 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:04.358 Nvme0n1 00:07:04.358 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:04.616 [ 00:07:04.616 { 00:07:04.616 "name": "Nvme0n1", 00:07:04.616 "aliases": [ 00:07:04.616 "55837d84-e672-46c8-b04b-2cbc3a96597f" 00:07:04.616 ], 00:07:04.616 "product_name": "NVMe disk", 00:07:04.616 "block_size": 4096, 00:07:04.616 "num_blocks": 38912, 00:07:04.616 "uuid": "55837d84-e672-46c8-b04b-2cbc3a96597f", 00:07:04.616 "numa_id": 1, 00:07:04.616 "assigned_rate_limits": { 00:07:04.616 "rw_ios_per_sec": 0, 00:07:04.616 "rw_mbytes_per_sec": 0, 00:07:04.616 "r_mbytes_per_sec": 0, 00:07:04.616 "w_mbytes_per_sec": 0 00:07:04.616 }, 00:07:04.616 "claimed": false, 00:07:04.616 "zoned": false, 00:07:04.616 "supported_io_types": { 00:07:04.616 "read": true, 
00:07:04.616 "write": true, 00:07:04.616 "unmap": true, 00:07:04.616 "flush": true, 00:07:04.616 "reset": true, 00:07:04.616 "nvme_admin": true, 00:07:04.617 "nvme_io": true, 00:07:04.617 "nvme_io_md": false, 00:07:04.617 "write_zeroes": true, 00:07:04.617 "zcopy": false, 00:07:04.617 "get_zone_info": false, 00:07:04.617 "zone_management": false, 00:07:04.617 "zone_append": false, 00:07:04.617 "compare": true, 00:07:04.617 "compare_and_write": true, 00:07:04.617 "abort": true, 00:07:04.617 "seek_hole": false, 00:07:04.617 "seek_data": false, 00:07:04.617 "copy": true, 00:07:04.617 "nvme_iov_md": false 00:07:04.617 }, 00:07:04.617 "memory_domains": [ 00:07:04.617 { 00:07:04.617 "dma_device_id": "system", 00:07:04.617 "dma_device_type": 1 00:07:04.617 } 00:07:04.617 ], 00:07:04.617 "driver_specific": { 00:07:04.617 "nvme": [ 00:07:04.617 { 00:07:04.617 "trid": { 00:07:04.617 "trtype": "TCP", 00:07:04.617 "adrfam": "IPv4", 00:07:04.617 "traddr": "10.0.0.2", 00:07:04.617 "trsvcid": "4420", 00:07:04.617 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:04.617 }, 00:07:04.617 "ctrlr_data": { 00:07:04.617 "cntlid": 1, 00:07:04.617 "vendor_id": "0x8086", 00:07:04.617 "model_number": "SPDK bdev Controller", 00:07:04.617 "serial_number": "SPDK0", 00:07:04.617 "firmware_revision": "25.01", 00:07:04.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:04.617 "oacs": { 00:07:04.617 "security": 0, 00:07:04.617 "format": 0, 00:07:04.617 "firmware": 0, 00:07:04.617 "ns_manage": 0 00:07:04.617 }, 00:07:04.617 "multi_ctrlr": true, 00:07:04.617 "ana_reporting": false 00:07:04.617 }, 00:07:04.617 "vs": { 00:07:04.617 "nvme_version": "1.3" 00:07:04.617 }, 00:07:04.617 "ns_data": { 00:07:04.617 "id": 1, 00:07:04.617 "can_share": true 00:07:04.617 } 00:07:04.617 } 00:07:04.617 ], 00:07:04.617 "mp_policy": "active_passive" 00:07:04.617 } 00:07:04.617 } 00:07:04.617 ] 00:07:04.617 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2767928 00:07:04.617 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:04.617 09:39:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:04.874 Running I/O for 10 seconds... 00:07:05.807 Latency(us) 00:07:05.807 [2024-11-20T08:39:29.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.807 Nvme0n1 : 1.00 22746.00 88.85 0.00 0.00 0.00 0.00 0.00 00:07:05.807 [2024-11-20T08:39:29.139Z] =================================================================================================================== 00:07:05.807 [2024-11-20T08:39:29.139Z] Total : 22746.00 88.85 0.00 0.00 0.00 0.00 0.00 00:07:05.807 00:07:06.878 09:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:06.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.878 Nvme0n1 : 2.00 22850.50 89.26 0.00 0.00 0.00 0.00 0.00 00:07:06.878 [2024-11-20T08:39:30.210Z] =================================================================================================================== 00:07:06.878 [2024-11-20T08:39:30.210Z] Total : 22850.50 89.26 0.00 0.00 0.00 0.00 0.00 00:07:06.878 00:07:06.878 true 00:07:06.878 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:06.878 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:07.134 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:07.134 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:07.134 09:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2767928 00:07:07.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.699 Nvme0n1 : 3.00 22800.33 89.06 0.00 0.00 0.00 0.00 0.00 00:07:07.699 [2024-11-20T08:39:31.031Z] =================================================================================================================== 00:07:07.699 [2024-11-20T08:39:31.031Z] Total : 22800.33 89.06 0.00 0.00 0.00 0.00 0.00 00:07:07.699 00:07:09.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.072 Nvme0n1 : 4.00 22902.25 89.46 0.00 0.00 0.00 0.00 0.00 00:07:09.072 [2024-11-20T08:39:32.404Z] =================================================================================================================== 00:07:09.072 [2024-11-20T08:39:32.404Z] Total : 22902.25 89.46 0.00 0.00 0.00 0.00 0.00 00:07:09.072 00:07:10.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.006 Nvme0n1 : 5.00 22972.20 89.74 0.00 0.00 0.00 0.00 0.00 00:07:10.006 [2024-11-20T08:39:33.338Z] =================================================================================================================== 00:07:10.006 [2024-11-20T08:39:33.338Z] Total : 22972.20 89.74 0.00 0.00 0.00 0.00 0.00 00:07:10.006 00:07:10.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.940 Nvme0n1 : 6.00 23020.83 89.93 0.00 0.00 0.00 0.00 0.00 00:07:10.940 [2024-11-20T08:39:34.272Z] =================================================================================================================== 00:07:10.940 
[2024-11-20T08:39:34.272Z] Total : 23020.83 89.93 0.00 0.00 0.00 0.00 0.00 00:07:10.940 00:07:11.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.875 Nvme0n1 : 7.00 23055.14 90.06 0.00 0.00 0.00 0.00 0.00 00:07:11.875 [2024-11-20T08:39:35.207Z] =================================================================================================================== 00:07:11.875 [2024-11-20T08:39:35.207Z] Total : 23055.14 90.06 0.00 0.00 0.00 0.00 0.00 00:07:11.875 00:07:12.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.807 Nvme0n1 : 8.00 23088.75 90.19 0.00 0.00 0.00 0.00 0.00 00:07:12.807 [2024-11-20T08:39:36.139Z] =================================================================================================================== 00:07:12.807 [2024-11-20T08:39:36.139Z] Total : 23088.75 90.19 0.00 0.00 0.00 0.00 0.00 00:07:12.807 00:07:13.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.740 Nvme0n1 : 9.00 23108.00 90.27 0.00 0.00 0.00 0.00 0.00 00:07:13.740 [2024-11-20T08:39:37.072Z] =================================================================================================================== 00:07:13.740 [2024-11-20T08:39:37.072Z] Total : 23108.00 90.27 0.00 0.00 0.00 0.00 0.00 00:07:13.740 00:07:15.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.112 Nvme0n1 : 10.00 23127.80 90.34 0.00 0.00 0.00 0.00 0.00 00:07:15.112 [2024-11-20T08:39:38.444Z] =================================================================================================================== 00:07:15.112 [2024-11-20T08:39:38.444Z] Total : 23127.80 90.34 0.00 0.00 0.00 0.00 0.00 00:07:15.112 00:07:15.112 00:07:15.112 Latency(us) 00:07:15.112 [2024-11-20T08:39:38.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:15.112 Nvme0n1 : 10.00 23133.40 90.36 0.00 0.00 5530.23 3191.32 11796.48 00:07:15.112 [2024-11-20T08:39:38.444Z] =================================================================================================================== 00:07:15.112 [2024-11-20T08:39:38.444Z] Total : 23133.40 90.36 0.00 0.00 5530.23 3191.32 11796.48 00:07:15.112 { 00:07:15.112 "results": [ 00:07:15.112 { 00:07:15.112 "job": "Nvme0n1", 00:07:15.112 "core_mask": "0x2", 00:07:15.112 "workload": "randwrite", 00:07:15.112 "status": "finished", 00:07:15.112 "queue_depth": 128, 00:07:15.112 "io_size": 4096, 00:07:15.112 "runtime": 10.003111, 00:07:15.112 "iops": 23133.40319826502, 00:07:15.112 "mibps": 90.36485624322273, 00:07:15.112 "io_failed": 0, 00:07:15.112 "io_timeout": 0, 00:07:15.112 "avg_latency_us": 5530.231107186352, 00:07:15.112 "min_latency_us": 3191.318260869565, 00:07:15.112 "max_latency_us": 11796.48 00:07:15.112 } 00:07:15.112 ], 00:07:15.112 "core_count": 1 00:07:15.112 } 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2767711 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2767711 ']' 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2767711 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2767711 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2767711' 00:07:15.112 killing process with pid 2767711 00:07:15.112 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2767711 00:07:15.112 Received shutdown signal, test time was about 10.000000 seconds 00:07:15.112 00:07:15.112 Latency(us) 00:07:15.112 [2024-11-20T08:39:38.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.112 [2024-11-20T08:39:38.445Z] =================================================================================================================== 00:07:15.113 [2024-11-20T08:39:38.445Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:15.113 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2767711 00:07:15.113 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:15.370 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:15.370 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:15.370 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:15.629 09:39:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2764410 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2764410 00:07:15.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2764410 Killed "${NVMF_APP[@]}" "$@" 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2769785 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2769785 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2769785 ']' 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.629 09:39:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.888 [2024-11-20 09:39:38.976270] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:07:15.888 [2024-11-20 09:39:38.976317] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.888 [2024-11-20 09:39:39.056701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.888 [2024-11-20 09:39:39.097460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.889 [2024-11-20 09:39:39.097497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.889 [2024-11-20 09:39:39.097504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.889 [2024-11-20 09:39:39.097511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.889 [2024-11-20 09:39:39.097516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:15.889 [2024-11-20 09:39:39.098085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.889 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.889 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:15.889 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.889 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.889 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:16.147 [2024-11-20 09:39:39.394921] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:16.147 [2024-11-20 09:39:39.395004] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:16.147 [2024-11-20 09:39:39.395031] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 55837d84-e672-46c8-b04b-2cbc3a96597f 00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=55837d84-e672-46c8-b04b-2cbc3a96597f 
00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:16.147 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:16.405 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 55837d84-e672-46c8-b04b-2cbc3a96597f -t 2000 00:07:16.663 [ 00:07:16.663 { 00:07:16.663 "name": "55837d84-e672-46c8-b04b-2cbc3a96597f", 00:07:16.663 "aliases": [ 00:07:16.663 "lvs/lvol" 00:07:16.663 ], 00:07:16.663 "product_name": "Logical Volume", 00:07:16.663 "block_size": 4096, 00:07:16.664 "num_blocks": 38912, 00:07:16.664 "uuid": "55837d84-e672-46c8-b04b-2cbc3a96597f", 00:07:16.664 "assigned_rate_limits": { 00:07:16.664 "rw_ios_per_sec": 0, 00:07:16.664 "rw_mbytes_per_sec": 0, 00:07:16.664 "r_mbytes_per_sec": 0, 00:07:16.664 "w_mbytes_per_sec": 0 00:07:16.664 }, 00:07:16.664 "claimed": false, 00:07:16.664 "zoned": false, 00:07:16.664 "supported_io_types": { 00:07:16.664 "read": true, 00:07:16.664 "write": true, 00:07:16.664 "unmap": true, 00:07:16.664 "flush": false, 00:07:16.664 "reset": true, 00:07:16.664 "nvme_admin": false, 00:07:16.664 "nvme_io": false, 00:07:16.664 "nvme_io_md": false, 00:07:16.664 "write_zeroes": true, 00:07:16.664 "zcopy": false, 00:07:16.664 "get_zone_info": false, 00:07:16.664 "zone_management": false, 00:07:16.664 "zone_append": 
false, 00:07:16.664 "compare": false, 00:07:16.664 "compare_and_write": false, 00:07:16.664 "abort": false, 00:07:16.664 "seek_hole": true, 00:07:16.664 "seek_data": true, 00:07:16.664 "copy": false, 00:07:16.664 "nvme_iov_md": false 00:07:16.664 }, 00:07:16.664 "driver_specific": { 00:07:16.664 "lvol": { 00:07:16.664 "lvol_store_uuid": "9e4bb3ee-2352-49c4-adf4-451a73825710", 00:07:16.664 "base_bdev": "aio_bdev", 00:07:16.664 "thin_provision": false, 00:07:16.664 "num_allocated_clusters": 38, 00:07:16.664 "snapshot": false, 00:07:16.664 "clone": false, 00:07:16.664 "esnap_clone": false 00:07:16.664 } 00:07:16.664 } 00:07:16.664 } 00:07:16.664 ] 00:07:16.664 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:16.664 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:16.664 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:16.921 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:16.921 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:16.921 09:39:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:16.921 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:16.921 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:17.179 [2024-11-20 09:39:40.363919] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:17.179 09:39:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:17.179 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:17.436 request: 00:07:17.436 { 00:07:17.436 "uuid": "9e4bb3ee-2352-49c4-adf4-451a73825710", 00:07:17.436 "method": "bdev_lvol_get_lvstores", 00:07:17.436 "req_id": 1 00:07:17.436 } 00:07:17.436 Got JSON-RPC error response 00:07:17.436 response: 00:07:17.436 { 00:07:17.437 "code": -19, 00:07:17.437 "message": "No such device" 00:07:17.437 } 00:07:17.437 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:17.437 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.437 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.437 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.437 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:17.694 aio_bdev 00:07:17.694 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 55837d84-e672-46c8-b04b-2cbc3a96597f 00:07:17.694 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=55837d84-e672-46c8-b04b-2cbc3a96597f 00:07:17.694 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:17.694 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:17.694 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:17.694 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:17.694 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:17.694 09:39:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 55837d84-e672-46c8-b04b-2cbc3a96597f -t 2000 00:07:17.951 [ 00:07:17.951 { 00:07:17.951 "name": "55837d84-e672-46c8-b04b-2cbc3a96597f", 00:07:17.951 "aliases": [ 00:07:17.951 "lvs/lvol" 00:07:17.951 ], 00:07:17.951 "product_name": "Logical Volume", 00:07:17.952 "block_size": 4096, 00:07:17.952 "num_blocks": 38912, 00:07:17.952 "uuid": "55837d84-e672-46c8-b04b-2cbc3a96597f", 00:07:17.952 "assigned_rate_limits": { 00:07:17.952 "rw_ios_per_sec": 0, 00:07:17.952 "rw_mbytes_per_sec": 0, 00:07:17.952 "r_mbytes_per_sec": 0, 00:07:17.952 "w_mbytes_per_sec": 0 00:07:17.952 }, 00:07:17.952 "claimed": false, 00:07:17.952 "zoned": false, 00:07:17.952 "supported_io_types": { 00:07:17.952 "read": true, 00:07:17.952 "write": true, 00:07:17.952 "unmap": true, 00:07:17.952 "flush": false, 00:07:17.952 "reset": true, 00:07:17.952 "nvme_admin": false, 00:07:17.952 "nvme_io": false, 00:07:17.952 "nvme_io_md": false, 00:07:17.952 "write_zeroes": true, 00:07:17.952 "zcopy": false, 00:07:17.952 "get_zone_info": false, 00:07:17.952 "zone_management": false, 00:07:17.952 "zone_append": false, 00:07:17.952 "compare": false, 00:07:17.952 "compare_and_write": false, 
00:07:17.952 "abort": false, 00:07:17.952 "seek_hole": true, 00:07:17.952 "seek_data": true, 00:07:17.952 "copy": false, 00:07:17.952 "nvme_iov_md": false 00:07:17.952 }, 00:07:17.952 "driver_specific": { 00:07:17.952 "lvol": { 00:07:17.952 "lvol_store_uuid": "9e4bb3ee-2352-49c4-adf4-451a73825710", 00:07:17.952 "base_bdev": "aio_bdev", 00:07:17.952 "thin_provision": false, 00:07:17.952 "num_allocated_clusters": 38, 00:07:17.952 "snapshot": false, 00:07:17.952 "clone": false, 00:07:17.952 "esnap_clone": false 00:07:17.952 } 00:07:17.952 } 00:07:17.952 } 00:07:17.952 ] 00:07:17.952 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:17.952 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:17.952 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:18.210 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:18.210 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:18.210 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:18.468 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:18.468 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 55837d84-e672-46c8-b04b-2cbc3a96597f 00:07:18.468 09:39:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e4bb3ee-2352-49c4-adf4-451a73825710 00:07:18.726 09:39:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:19.048 00:07:19.048 real 0m16.936s 00:07:19.048 user 0m43.789s 00:07:19.048 sys 0m3.803s 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:19.048 ************************************ 00:07:19.048 END TEST lvs_grow_dirty 00:07:19.048 ************************************ 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:19.048 nvmf_trace.0 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:19.048 rmmod nvme_tcp 00:07:19.048 rmmod nvme_fabrics 00:07:19.048 rmmod nvme_keyring 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2769785 ']' 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2769785 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2769785 ']' 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2769785 
00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.048 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2769785 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2769785' 00:07:19.307 killing process with pid 2769785 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2769785 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2769785 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.307 09:39:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.842 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:21.842 00:07:21.842 real 0m41.980s 00:07:21.842 user 1m4.792s 00:07:21.842 sys 0m10.267s 00:07:21.842 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.842 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.843 ************************************ 00:07:21.843 END TEST nvmf_lvs_grow 00:07:21.843 ************************************ 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:21.843 ************************************ 00:07:21.843 START TEST nvmf_bdev_io_wait 00:07:21.843 ************************************ 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:21.843 * Looking for test storage... 
00:07:21.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # lcov --version 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:07:21.843 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.843 --rc genhtml_branch_coverage=1 00:07:21.843 --rc genhtml_function_coverage=1 00:07:21.843 --rc genhtml_legend=1 00:07:21.843 --rc geninfo_all_blocks=1 00:07:21.843 --rc geninfo_unexecuted_blocks=1 00:07:21.843 00:07:21.843 ' 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:07:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.843 --rc genhtml_branch_coverage=1 00:07:21.843 --rc genhtml_function_coverage=1 00:07:21.843 --rc genhtml_legend=1 00:07:21.843 --rc geninfo_all_blocks=1 00:07:21.843 --rc geninfo_unexecuted_blocks=1 00:07:21.843 00:07:21.843 ' 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:07:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.843 --rc genhtml_branch_coverage=1 00:07:21.843 --rc genhtml_function_coverage=1 00:07:21.843 --rc genhtml_legend=1 00:07:21.843 --rc geninfo_all_blocks=1 00:07:21.843 --rc geninfo_unexecuted_blocks=1 00:07:21.843 00:07:21.843 ' 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:07:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.843 --rc genhtml_branch_coverage=1 00:07:21.843 --rc genhtml_function_coverage=1 00:07:21.843 --rc genhtml_legend=1 00:07:21.843 --rc geninfo_all_blocks=1 00:07:21.843 --rc geninfo_unexecuted_blocks=1 00:07:21.843 00:07:21.843 ' 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.843 09:39:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:21.843 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:21.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:21.844 09:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:28.416 09:39:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:28.416 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:28.416 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.416 09:39:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:28.416 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:28.417 Found net devices under 0000:86:00.0: cvl_0_0 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.417 
09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:28.417 Found net devices under 0000:86:00.1: cvl_0_1 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.417 09:39:50 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:28.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:07:28.417 00:07:28.417 --- 10.0.0.2 ping statistics --- 00:07:28.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.417 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:07:28.417 00:07:28.417 --- 10.0.0.1 ping statistics --- 00:07:28.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.417 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2773861 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2773861 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2773861 ']' 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.417 09:39:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 [2024-11-20 09:39:50.955618] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:07:28.417 [2024-11-20 09:39:50.955665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.417 [2024-11-20 09:39:51.035718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.417 [2024-11-20 09:39:51.079662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.417 [2024-11-20 09:39:51.079702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:28.417 [2024-11-20 09:39:51.079710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.417 [2024-11-20 09:39:51.079715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.417 [2024-11-20 09:39:51.079720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.417 [2024-11-20 09:39:51.081333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.417 [2024-11-20 09:39:51.081453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.417 [2024-11-20 09:39:51.081564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.417 [2024-11-20 09:39:51.081564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 09:39:51 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.417 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.418 [2024-11-20 09:39:51.210097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.418 Malloc0 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.418 
09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.418 [2024-11-20 09:39:51.261695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2774090 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2774092 
00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:28.418 { 00:07:28.418 "params": { 00:07:28.418 "name": "Nvme$subsystem", 00:07:28.418 "trtype": "$TEST_TRANSPORT", 00:07:28.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.418 "adrfam": "ipv4", 00:07:28.418 "trsvcid": "$NVMF_PORT", 00:07:28.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.418 "hdgst": ${hdgst:-false}, 00:07:28.418 "ddgst": ${ddgst:-false} 00:07:28.418 }, 00:07:28.418 "method": "bdev_nvme_attach_controller" 00:07:28.418 } 00:07:28.418 EOF 00:07:28.418 )") 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2774094 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:28.418 { 00:07:28.418 "params": { 00:07:28.418 
"name": "Nvme$subsystem", 00:07:28.418 "trtype": "$TEST_TRANSPORT", 00:07:28.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.418 "adrfam": "ipv4", 00:07:28.418 "trsvcid": "$NVMF_PORT", 00:07:28.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.418 "hdgst": ${hdgst:-false}, 00:07:28.418 "ddgst": ${ddgst:-false} 00:07:28.418 }, 00:07:28.418 "method": "bdev_nvme_attach_controller" 00:07:28.418 } 00:07:28.418 EOF 00:07:28.418 )") 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2774097 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:07:28.418 { 00:07:28.418 "params": { 00:07:28.418 "name": "Nvme$subsystem", 00:07:28.418 "trtype": "$TEST_TRANSPORT", 00:07:28.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.418 "adrfam": "ipv4", 00:07:28.418 "trsvcid": "$NVMF_PORT", 00:07:28.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.418 "hdgst": ${hdgst:-false}, 00:07:28.418 "ddgst": ${ddgst:-false} 00:07:28.418 }, 00:07:28.418 "method": "bdev_nvme_attach_controller" 00:07:28.418 } 00:07:28.418 EOF 00:07:28.418 )") 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:28.418 { 00:07:28.418 "params": { 00:07:28.418 "name": "Nvme$subsystem", 00:07:28.418 "trtype": "$TEST_TRANSPORT", 00:07:28.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.418 "adrfam": "ipv4", 00:07:28.418 "trsvcid": "$NVMF_PORT", 00:07:28.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.418 "hdgst": ${hdgst:-false}, 00:07:28.418 "ddgst": ${ddgst:-false} 00:07:28.418 }, 00:07:28.418 "method": "bdev_nvme_attach_controller" 00:07:28.418 } 00:07:28.418 EOF 00:07:28.418 )") 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2774090 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:28.418 
09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:28.418 "params": { 00:07:28.418 "name": "Nvme1", 00:07:28.418 "trtype": "tcp", 00:07:28.418 "traddr": "10.0.0.2", 00:07:28.418 "adrfam": "ipv4", 00:07:28.418 "trsvcid": "4420", 00:07:28.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:28.418 "hdgst": false, 00:07:28.418 "ddgst": false 00:07:28.418 }, 00:07:28.418 "method": "bdev_nvme_attach_controller" 00:07:28.418 }' 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:28.418 "params": { 00:07:28.418 "name": "Nvme1", 00:07:28.418 "trtype": "tcp", 00:07:28.418 "traddr": "10.0.0.2", 00:07:28.418 "adrfam": "ipv4", 00:07:28.418 "trsvcid": "4420", 00:07:28.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:28.418 "hdgst": false, 00:07:28.418 "ddgst": false 00:07:28.418 }, 00:07:28.418 "method": "bdev_nvme_attach_controller" 00:07:28.418 }' 00:07:28.418 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:28.419 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:28.419 "params": { 00:07:28.419 "name": "Nvme1", 00:07:28.419 "trtype": "tcp", 00:07:28.419 "traddr": "10.0.0.2", 00:07:28.419 "adrfam": "ipv4", 00:07:28.419 "trsvcid": "4420", 00:07:28.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:28.419 "hdgst": false, 00:07:28.419 "ddgst": false 00:07:28.419 }, 00:07:28.419 "method": "bdev_nvme_attach_controller" 00:07:28.419 }' 00:07:28.419 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:28.419 09:39:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:28.419 "params": { 00:07:28.419 "name": "Nvme1", 00:07:28.419 "trtype": "tcp", 00:07:28.419 "traddr": "10.0.0.2", 00:07:28.419 "adrfam": "ipv4", 00:07:28.419 "trsvcid": "4420", 00:07:28.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:28.419 "hdgst": false, 00:07:28.419 "ddgst": false 00:07:28.419 }, 00:07:28.419 "method": "bdev_nvme_attach_controller" 00:07:28.419 }' 00:07:28.419 [2024-11-20 09:39:51.311544] Starting SPDK v25.01-pre git sha1 
27a4d33d8 / DPDK 24.03.0 initialization... 00:07:28.419 [2024-11-20 09:39:51.311594] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:28.419 [2024-11-20 09:39:51.313994] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:07:28.419 [2024-11-20 09:39:51.314035] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:28.419 [2024-11-20 09:39:51.314920] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:07:28.419 [2024-11-20 09:39:51.314963] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:28.419 [2024-11-20 09:39:51.318271] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:07:28.419 [2024-11-20 09:39:51.318317] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:28.419 [2024-11-20 09:39:51.493300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.419 [2024-11-20 09:39:51.536257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:28.419 [2024-11-20 09:39:51.593931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.419 [2024-11-20 09:39:51.636884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:28.419 [2024-11-20 09:39:51.684971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.419 [2024-11-20 09:39:51.728199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.419 [2024-11-20 09:39:51.738424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:28.676 [2024-11-20 09:39:51.771104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:28.676 Running I/O for 1 seconds... 00:07:28.676 Running I/O for 1 seconds... 00:07:28.676 Running I/O for 1 seconds... 00:07:28.676 Running I/O for 1 seconds... 
00:07:29.608 7468.00 IOPS, 29.17 MiB/s 00:07:29.608 Latency(us) 00:07:29.608 [2024-11-20T08:39:52.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.608 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:29.608 Nvme1n1 : 1.02 7474.96 29.20 0.00 0.00 16952.15 6639.08 26442.35 00:07:29.608 [2024-11-20T08:39:52.940Z] =================================================================================================================== 00:07:29.608 [2024-11-20T08:39:52.940Z] Total : 7474.96 29.20 0.00 0.00 16952.15 6639.08 26442.35 00:07:29.608 10897.00 IOPS, 42.57 MiB/s 00:07:29.608 Latency(us) 00:07:29.608 [2024-11-20T08:39:52.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.608 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:29.608 Nvme1n1 : 1.01 10956.43 42.80 0.00 0.00 11639.61 5556.31 22681.15 00:07:29.608 [2024-11-20T08:39:52.940Z] =================================================================================================================== 00:07:29.608 [2024-11-20T08:39:52.940Z] Total : 10956.43 42.80 0.00 0.00 11639.61 5556.31 22681.15 00:07:29.608 7590.00 IOPS, 29.65 MiB/s 00:07:29.608 Latency(us) 00:07:29.608 [2024-11-20T08:39:52.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.608 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:29.608 Nvme1n1 : 1.01 7726.35 30.18 0.00 0.00 16530.46 2977.61 38979.67 00:07:29.608 [2024-11-20T08:39:52.940Z] =================================================================================================================== 00:07:29.608 [2024-11-20T08:39:52.940Z] Total : 7726.35 30.18 0.00 0.00 16530.46 2977.61 38979.67 00:07:29.865 246000.00 IOPS, 960.94 MiB/s 00:07:29.866 Latency(us) 00:07:29.866 [2024-11-20T08:39:53.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.866 Job: Nvme1n1 (Core Mask 
0x40, workload: flush, depth: 128, IO size: 4096) 00:07:29.866 Nvme1n1 : 1.00 245616.73 959.44 0.00 0.00 517.83 235.07 1545.79 00:07:29.866 [2024-11-20T08:39:53.198Z] =================================================================================================================== 00:07:29.866 [2024-11-20T08:39:53.198Z] Total : 245616.73 959.44 0.00 0.00 517.83 235.07 1545.79 00:07:29.866 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2774092 00:07:29.866 09:39:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2774094 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2774097 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.866 
09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.866 rmmod nvme_tcp 00:07:29.866 rmmod nvme_fabrics 00:07:29.866 rmmod nvme_keyring 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2773861 ']' 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2773861 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2773861 ']' 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2773861 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2773861 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2773861' 00:07:29.866 killing process with pid 2773861 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2773861 00:07:29.866 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 2773861 00:07:30.124 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:30.124 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:30.124 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:30.124 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:30.124 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:30.124 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:30.124 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:30.124 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:30.124 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:30.125 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.125 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.125 09:39:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:32.661 00:07:32.661 real 0m10.710s 00:07:32.661 user 0m15.649s 00:07:32.661 sys 0m6.155s 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.661 ************************************ 00:07:32.661 END TEST nvmf_bdev_io_wait 
00:07:32.661 ************************************ 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:32.661 ************************************ 00:07:32.661 START TEST nvmf_queue_depth 00:07:32.661 ************************************ 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:32.661 * Looking for test storage... 00:07:32.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # lcov --version 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.661 09:39:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:07:32.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.661 --rc genhtml_branch_coverage=1 00:07:32.661 --rc genhtml_function_coverage=1 00:07:32.661 --rc genhtml_legend=1 00:07:32.661 --rc geninfo_all_blocks=1 00:07:32.661 --rc 
geninfo_unexecuted_blocks=1 00:07:32.661 00:07:32.661 ' 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:07:32.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.661 --rc genhtml_branch_coverage=1 00:07:32.661 --rc genhtml_function_coverage=1 00:07:32.661 --rc genhtml_legend=1 00:07:32.661 --rc geninfo_all_blocks=1 00:07:32.661 --rc geninfo_unexecuted_blocks=1 00:07:32.661 00:07:32.661 ' 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:07:32.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.661 --rc genhtml_branch_coverage=1 00:07:32.661 --rc genhtml_function_coverage=1 00:07:32.661 --rc genhtml_legend=1 00:07:32.661 --rc geninfo_all_blocks=1 00:07:32.661 --rc geninfo_unexecuted_blocks=1 00:07:32.661 00:07:32.661 ' 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:07:32.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.661 --rc genhtml_branch_coverage=1 00:07:32.661 --rc genhtml_function_coverage=1 00:07:32.661 --rc genhtml_legend=1 00:07:32.661 --rc geninfo_all_blocks=1 00:07:32.661 --rc geninfo_unexecuted_blocks=1 00:07:32.661 00:07:32.661 ' 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.661 09:39:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.661 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.662 09:39:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:32.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.662 09:39:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:32.662 09:39:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.230 09:40:01 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:39.230 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:39.230 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.230 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:39.230 Found net devices under 0000:86:00.0: cvl_0_0 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:39.231 Found net devices under 0000:86:00.1: cvl_0_1 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.231 
09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:07:39.231 00:07:39.231 --- 10.0.0.2 ping statistics --- 00:07:39.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.231 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:07:39.231 00:07:39.231 --- 10.0.0.1 ping statistics --- 00:07:39.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.231 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2777892 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2777892 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2777892 ']' 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 [2024-11-20 09:40:01.747957] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:07:39.231 [2024-11-20 09:40:01.748011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.231 [2024-11-20 09:40:01.827992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.231 [2024-11-20 09:40:01.869810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.231 [2024-11-20 09:40:01.869849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:39.231 [2024-11-20 09:40:01.869856] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.231 [2024-11-20 09:40:01.869862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.231 [2024-11-20 09:40:01.869867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.231 [2024-11-20 09:40:01.870459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.231 09:40:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 [2024-11-20 09:40:02.005893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.231 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.231 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 Malloc0 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 [2024-11-20 09:40:02.056188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.232 09:40:02 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2777912 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2777912 /var/tmp/bdevperf.sock 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2777912 ']' 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:39.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 [2024-11-20 09:40:02.106665] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:07:39.232 [2024-11-20 09:40:02.106705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777912 ] 00:07:39.232 [2024-11-20 09:40:02.181372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.232 [2024-11-20 09:40:02.224045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:39.232 NVMe0n1 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.232 09:40:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:39.232 Running I/O for 10 seconds... 
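The trace above shows the queue-depth test's driver sequence: bdevperf is launched in wait mode (`-z`) against its own RPC socket, an NVMe-oF/TCP controller is attached over that socket, and the workload is then kicked off through `bdevperf.py perform_tests`. A dry-run sketch of that sequence follows; the `run` helper only echoes each step, since the real commands need the SPDK build tree and a live target, and the paths are abbreviated relative to the ones in the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the bdevperf driver flow seen in the trace above.
# Flags mirror the logged invocation: -q 1024 (queue depth), -o 4096
# (IO size in bytes), -w verify (workload), -t 10 (seconds).
run() { printf '+ %s\n' "$*"; }

run bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
run rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
run bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```

The `-z` flag is what makes the two-phase flow possible: bdevperf starts idle and only begins issuing I/O once `perform_tests` arrives over the RPC socket, which is why the controller attach can happen in between.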
00:07:41.556 11415.00 IOPS, 44.59 MiB/s [2024-11-20T08:40:05.834Z] 11774.00 IOPS, 45.99 MiB/s [2024-11-20T08:40:06.770Z] 11921.00 IOPS, 46.57 MiB/s [2024-11-20T08:40:07.705Z] 11897.75 IOPS, 46.48 MiB/s [2024-11-20T08:40:08.641Z] 11912.80 IOPS, 46.53 MiB/s [2024-11-20T08:40:09.576Z] 11994.83 IOPS, 46.85 MiB/s [2024-11-20T08:40:10.952Z] 12090.29 IOPS, 47.23 MiB/s [2024-11-20T08:40:11.888Z] 12084.62 IOPS, 47.21 MiB/s [2024-11-20T08:40:12.825Z] 12058.00 IOPS, 47.10 MiB/s [2024-11-20T08:40:12.825Z] 12074.00 IOPS, 47.16 MiB/s 00:07:49.493 Latency(us) 00:07:49.493 [2024-11-20T08:40:12.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.493 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:49.493 Verification LBA range: start 0x0 length 0x4000 00:07:49.493 NVMe0n1 : 10.06 12098.34 47.26 0.00 0.00 84367.40 19489.84 55392.17 00:07:49.493 [2024-11-20T08:40:12.825Z] =================================================================================================================== 00:07:49.493 [2024-11-20T08:40:12.825Z] Total : 12098.34 47.26 0.00 0.00 84367.40 19489.84 55392.17 00:07:49.493 { 00:07:49.493 "results": [ 00:07:49.493 { 00:07:49.493 "job": "NVMe0n1", 00:07:49.493 "core_mask": "0x1", 00:07:49.493 "workload": "verify", 00:07:49.493 "status": "finished", 00:07:49.493 "verify_range": { 00:07:49.493 "start": 0, 00:07:49.493 "length": 16384 00:07:49.493 }, 00:07:49.493 "queue_depth": 1024, 00:07:49.493 "io_size": 4096, 00:07:49.493 "runtime": 10.062541, 00:07:49.493 "iops": 12098.335798085196, 00:07:49.493 "mibps": 47.2591242112703, 00:07:49.493 "io_failed": 0, 00:07:49.493 "io_timeout": 0, 00:07:49.493 "avg_latency_us": 84367.40103284977, 00:07:49.493 "min_latency_us": 19489.83652173913, 00:07:49.493 "max_latency_us": 55392.16695652174 00:07:49.493 } 00:07:49.493 ], 00:07:49.493 "core_count": 1 00:07:49.493 } 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2777912 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2777912 ']' 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2777912 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2777912 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2777912' 00:07:49.493 killing process with pid 2777912 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2777912 00:07:49.493 Received shutdown signal, test time was about 10.000000 seconds 00:07:49.493 00:07:49.493 Latency(us) 00:07:49.493 [2024-11-20T08:40:12.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.493 [2024-11-20T08:40:12.825Z] =================================================================================================================== 00:07:49.493 [2024-11-20T08:40:12.825Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:49.493 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2777912 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:49.752 rmmod nvme_tcp 00:07:49.752 rmmod nvme_fabrics 00:07:49.752 rmmod nvme_keyring 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2777892 ']' 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2777892 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2777892 ']' 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2777892 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2777892 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2777892' 00:07:49.752 killing process with pid 2777892 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2777892 00:07:49.752 09:40:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2777892 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.010 09:40:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.918 09:40:15 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.918 00:07:51.918 real 0m19.727s 00:07:51.918 user 0m23.071s 00:07:51.918 sys 0m6.067s 00:07:51.918 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.918 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:51.918 ************************************ 00:07:51.918 END TEST nvmf_queue_depth 00:07:51.918 ************************************ 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.178 ************************************ 00:07:52.178 START TEST nvmf_target_multipath 00:07:52.178 ************************************ 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:52.178 * Looking for test storage... 
00:07:52.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # lcov --version 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:52.178 09:40:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:52.178 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:07:52.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.178 --rc genhtml_branch_coverage=1 00:07:52.178 --rc genhtml_function_coverage=1 00:07:52.178 --rc genhtml_legend=1 00:07:52.178 --rc geninfo_all_blocks=1 00:07:52.179 --rc geninfo_unexecuted_blocks=1 00:07:52.179 00:07:52.179 ' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:07:52.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.179 --rc genhtml_branch_coverage=1 00:07:52.179 --rc genhtml_function_coverage=1 00:07:52.179 --rc genhtml_legend=1 00:07:52.179 --rc geninfo_all_blocks=1 00:07:52.179 --rc geninfo_unexecuted_blocks=1 00:07:52.179 00:07:52.179 ' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:07:52.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.179 --rc genhtml_branch_coverage=1 00:07:52.179 --rc genhtml_function_coverage=1 00:07:52.179 --rc genhtml_legend=1 00:07:52.179 --rc geninfo_all_blocks=1 00:07:52.179 --rc geninfo_unexecuted_blocks=1 00:07:52.179 00:07:52.179 ' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:07:52.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.179 --rc genhtml_branch_coverage=1 00:07:52.179 --rc genhtml_function_coverage=1 00:07:52.179 --rc genhtml_legend=1 00:07:52.179 --rc geninfo_all_blocks=1 00:07:52.179 --rc geninfo_unexecuted_blocks=1 00:07:52.179 00:07:52.179 ' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:52.179 09:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.748 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:58.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:58.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:58.749 Found net devices under 0000:86:00.0: cvl_0_0 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.749 09:40:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:58.749 Found net devices under 0000:86:00.1: cvl_0_1 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:58.749 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:58.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:07:58.749 00:07:58.749 --- 10.0.0.2 ping statistics --- 00:07:58.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.749 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:07:58.750 00:07:58.750 --- 10.0.0.1 ping statistics --- 00:07:58.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.750 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:58.750 only one NIC for nvmf test 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:58.750 09:40:21 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.750 rmmod nvme_tcp 00:07:58.750 rmmod nvme_fabrics 00:07:58.750 rmmod nvme_keyring 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.750 09:40:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.654 00:08:00.654 real 0m8.425s 00:08:00.654 user 0m1.864s 00:08:00.654 sys 0m4.579s 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:00.654 ************************************ 00:08:00.654 END TEST nvmf_target_multipath 00:08:00.654 ************************************ 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.654 ************************************ 00:08:00.654 START TEST nvmf_zcopy 00:08:00.654 ************************************ 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:00.654 * Looking for test storage... 00:08:00.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # lcov --version 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.654 09:40:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:08:00.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.654 --rc genhtml_branch_coverage=1 00:08:00.654 --rc genhtml_function_coverage=1 00:08:00.654 --rc genhtml_legend=1 00:08:00.654 --rc geninfo_all_blocks=1 00:08:00.654 --rc geninfo_unexecuted_blocks=1 00:08:00.654 00:08:00.654 ' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:08:00.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.654 --rc genhtml_branch_coverage=1 00:08:00.654 --rc genhtml_function_coverage=1 00:08:00.654 --rc genhtml_legend=1 00:08:00.654 --rc geninfo_all_blocks=1 00:08:00.654 --rc geninfo_unexecuted_blocks=1 00:08:00.654 00:08:00.654 ' 00:08:00.654 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:08:00.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.654 --rc genhtml_branch_coverage=1 00:08:00.654 --rc genhtml_function_coverage=1 00:08:00.654 --rc genhtml_legend=1 00:08:00.654 --rc geninfo_all_blocks=1 00:08:00.654 --rc geninfo_unexecuted_blocks=1 00:08:00.654 00:08:00.654 ' 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:08:00.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.655 --rc genhtml_branch_coverage=1 00:08:00.655 --rc 
genhtml_function_coverage=1 00:08:00.655 --rc genhtml_legend=1 00:08:00.655 --rc geninfo_all_blocks=1 00:08:00.655 --rc geninfo_unexecuted_blocks=1 00:08:00.655 00:08:00.655 ' 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.655 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.915 09:40:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:00.915 09:40:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.915 09:40:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.915 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:00.915 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:00.915 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:00.915 09:40:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:07.634 09:40:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:07.634 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:07.634 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.634 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:07.635 Found net devices under 0000:86:00.0: cvl_0_0 00:08:07.635 09:40:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:07.635 Found net devices under 0000:86:00.1: cvl_0_1 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.635 09:40:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:07.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:08:07.635 00:08:07.635 --- 10.0.0.2 ping statistics --- 00:08:07.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.635 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:08:07.635 00:08:07.635 --- 10.0.0.1 ping statistics --- 00:08:07.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.635 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2786815 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2786815 00:08:07.635 09:40:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2786815 ']' 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.635 [2024-11-20 09:40:30.051969] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:08:07.635 [2024-11-20 09:40:30.052020] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.635 [2024-11-20 09:40:30.131211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.635 [2024-11-20 09:40:30.170696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.635 [2024-11-20 09:40:30.170731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:07.635 [2024-11-20 09:40:30.170739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.635 [2024-11-20 09:40:30.170745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.635 [2024-11-20 09:40:30.170750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.635 [2024-11-20 09:40:30.171308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.635 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.636 [2024-11-20 09:40:30.318945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.636 [2024-11-20 09:40:30.339181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.636 malloc0 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.636 { 00:08:07.636 "params": { 00:08:07.636 "name": "Nvme$subsystem", 00:08:07.636 "trtype": "$TEST_TRANSPORT", 00:08:07.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.636 "adrfam": "ipv4", 00:08:07.636 "trsvcid": "$NVMF_PORT", 00:08:07.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.636 "hdgst": ${hdgst:-false}, 00:08:07.636 "ddgst": ${ddgst:-false} 00:08:07.636 }, 00:08:07.636 "method": "bdev_nvme_attach_controller" 00:08:07.636 } 00:08:07.636 EOF 00:08:07.636 )") 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:07.636 09:40:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.636 "params": { 00:08:07.636 "name": "Nvme1", 00:08:07.636 "trtype": "tcp", 00:08:07.636 "traddr": "10.0.0.2", 00:08:07.636 "adrfam": "ipv4", 00:08:07.636 "trsvcid": "4420", 00:08:07.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.636 "hdgst": false, 00:08:07.636 "ddgst": false 00:08:07.636 }, 00:08:07.636 "method": "bdev_nvme_attach_controller" 00:08:07.636 }' 00:08:07.636 [2024-11-20 09:40:30.422928] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:08:07.636 [2024-11-20 09:40:30.422978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786842 ] 00:08:07.636 [2024-11-20 09:40:30.498609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.636 [2024-11-20 09:40:30.540529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.636 Running I/O for 10 seconds... 
00:08:09.505 8421.00 IOPS, 65.79 MiB/s [2024-11-20T08:40:33.771Z] 8514.00 IOPS, 66.52 MiB/s [2024-11-20T08:40:35.146Z] 8545.00 IOPS, 66.76 MiB/s [2024-11-20T08:40:36.079Z] 8553.00 IOPS, 66.82 MiB/s [2024-11-20T08:40:37.013Z] 8564.80 IOPS, 66.91 MiB/s [2024-11-20T08:40:37.949Z] 8565.00 IOPS, 66.91 MiB/s [2024-11-20T08:40:38.883Z] 8573.43 IOPS, 66.98 MiB/s [2024-11-20T08:40:39.819Z] 8576.50 IOPS, 67.00 MiB/s [2024-11-20T08:40:41.195Z] 8580.22 IOPS, 67.03 MiB/s [2024-11-20T08:40:41.195Z] 8576.60 IOPS, 67.00 MiB/s 00:08:17.863 Latency(us) 00:08:17.863 [2024-11-20T08:40:41.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.863 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:17.863 Verification LBA range: start 0x0 length 0x1000 00:08:17.863 Nvme1n1 : 10.01 8578.03 67.02 0.00 0.00 14878.48 1738.13 23023.08 00:08:17.863 [2024-11-20T08:40:41.195Z] =================================================================================================================== 00:08:17.863 [2024-11-20T08:40:41.195Z] Total : 8578.03 67.02 0.00 0.00 14878.48 1738.13 23023.08 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2788675 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.863 09:40:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.863 { 00:08:17.863 "params": { 00:08:17.863 "name": "Nvme$subsystem", 00:08:17.863 "trtype": "$TEST_TRANSPORT", 00:08:17.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.863 "adrfam": "ipv4", 00:08:17.863 "trsvcid": "$NVMF_PORT", 00:08:17.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.863 "hdgst": ${hdgst:-false}, 00:08:17.863 "ddgst": ${ddgst:-false} 00:08:17.863 }, 00:08:17.863 "method": "bdev_nvme_attach_controller" 00:08:17.863 } 00:08:17.863 EOF 00:08:17.863 )") 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:17.863 [2024-11-20 09:40:40.939642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.863 [2024-11-20 09:40:40.939676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:17.863 09:40:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.864 "params": { 00:08:17.864 "name": "Nvme1", 00:08:17.864 "trtype": "tcp", 00:08:17.864 "traddr": "10.0.0.2", 00:08:17.864 "adrfam": "ipv4", 00:08:17.864 "trsvcid": "4420", 00:08:17.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.864 "hdgst": false, 00:08:17.864 "ddgst": false 00:08:17.864 }, 00:08:17.864 "method": "bdev_nvme_attach_controller" 00:08:17.864 }' 00:08:17.864 [2024-11-20 09:40:40.951644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:40.951657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:40.963671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:40.963681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:40.975703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:40.975713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:40.977348] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:08:17.864 [2024-11-20 09:40:40.977391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788675 ] 00:08:17.864 [2024-11-20 09:40:40.987738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:40.987749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:40.999767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:40.999777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.011804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.011814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.023835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.023845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.035866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.035875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.047902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.047914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.051543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.864 [2024-11-20 09:40:41.059933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:17.864 [2024-11-20 09:40:41.059950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.071968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.071981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.084022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.084037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.093405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.864 [2024-11-20 09:40:41.096032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.096044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.108075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.108094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.120102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.120121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.132135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.132149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.144166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.144179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.156200] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.156215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.168227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.168238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.180272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.180295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.864 [2024-11-20 09:40:41.192315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.864 [2024-11-20 09:40:41.192338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.204337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.204356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.216365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.216381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.228392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.228402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.240426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.240436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.252456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.252465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.264493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.264508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.276520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.276530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.288551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.288561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.300588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.300600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.312622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.312636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.320639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.320650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.328661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 [2024-11-20 09:40:41.328671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.123 [2024-11-20 09:40:41.336682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.123 
[2024-11-20 09:40:41.336693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:18.123 [2024-11-20 09:40:41.348717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:18.123 [2024-11-20 09:40:41.348730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:18.123 Running I/O for 5 seconds...
00:08:19.159 16277.00 IOPS, 127.16 MiB/s [2024-11-20T08:40:42.491Z]
00:08:19.935 [2024-11-20 09:40:43.154076]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.154096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.935 [2024-11-20 09:40:43.163335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.163354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.935 [2024-11-20 09:40:43.177727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.177747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.935 [2024-11-20 09:40:43.186605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.186626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.935 [2024-11-20 09:40:43.195773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.195793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.935 [2024-11-20 09:40:43.205328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.205347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.935 [2024-11-20 09:40:43.214143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.214161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.935 [2024-11-20 09:40:43.229274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.229295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.935 [2024-11-20 09:40:43.240258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.240279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.935 [2024-11-20 09:40:43.255002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.935 [2024-11-20 09:40:43.255022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.265976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.265998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.280609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.280631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.291577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.291601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.306132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.306157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.316912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.316933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.325994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.326014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.335404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 
[2024-11-20 09:40:43.335423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.350031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.350051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.363867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.363887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.372784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.372803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 16410.00 IOPS, 128.20 MiB/s [2024-11-20T08:40:43.526Z] [2024-11-20 09:40:43.382082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.382101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.391885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.391904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.406874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.406893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.418182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.418201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.427104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 
[2024-11-20 09:40:43.427123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.436563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.436583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.445989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.446008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.460879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.460899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.474449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.474468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.488081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.488100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.502283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.502303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.194 [2024-11-20 09:40:43.511459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.194 [2024-11-20 09:40:43.511478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.526112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.526135] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.535263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.535285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.544095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.544115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.553215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.553234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.562669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.562689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.577295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.577314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.586356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.586375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.595234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.595253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.604494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.604513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:20.453 [2024-11-20 09:40:43.613407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.613426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.628328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.628347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.643541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.643562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.652538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.652557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.661453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.661472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.670321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.670340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.684842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.684862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.698894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.698914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.706621] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.706640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.715489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.715508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.724668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.724687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.739139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.739158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.753142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.753163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.762092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.762112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.771522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.771542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.453 [2024-11-20 09:40:43.780287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.453 [2024-11-20 09:40:43.780307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.795255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.795276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.802709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.802728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.811862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.811882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.820750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.820769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.830133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.830152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.844854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.844873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.853937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.853961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.868204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.868235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.877297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 
[2024-11-20 09:40:43.877316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.885933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.885956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.900324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.900343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.909198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.909217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.918591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.918611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.927482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.927501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.936867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.936886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.951476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.951495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.960545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.960564] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.969729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.969748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.979233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.979252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:43.987855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:43.987874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:44.002604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:44.002624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:44.016743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:44.016762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:44.024294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:44.024313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.712 [2024-11-20 09:40:44.033445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.712 [2024-11-20 09:40:44.033464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.043078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.043100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:20.972 [2024-11-20 09:40:44.057965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.057986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.067132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.067152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.076196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.076215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.091492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.091512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.102361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.102381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.116937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.116962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.127708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.127728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.137127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.137148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.145754] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.145773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.155200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.155219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.169910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.169930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.178888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.178907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.188421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.188440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.197633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.197651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.206900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.206919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.221815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.221835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.233205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.233224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.247671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.247691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.256568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.256587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.271223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.271243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.285217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.285236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.972 [2024-11-20 09:40:44.294265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.972 [2024-11-20 09:40:44.294285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.303173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.303194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.318320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.318341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.328400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 
[2024-11-20 09:40:44.328420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.342649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.342676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.353542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.353563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.362464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.362484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.371683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.371703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 16455.67 IOPS, 128.56 MiB/s [2024-11-20T08:40:44.563Z] [2024-11-20 09:40:44.381151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.381172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.395626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.395646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.404938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.404964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.413685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 
[2024-11-20 09:40:44.413704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.422935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.422960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.432465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.432484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.441870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.441889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.451322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.451341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.460575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.460594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.469349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.469368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.478623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.478642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.488165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.488184] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.231 [2024-11-20 09:40:44.497519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.231 [2024-11-20 09:40:44.497538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats continuously from 09:40:44.511 through 09:40:46.333, elapsed markers 00:08:21.231–00:08:23.042 ...]
00:08:22.265 16462.75 IOPS, 128.62 MiB/s [2024-11-20T08:40:45.597Z]
00:08:23.042 [2024-11-20 09:40:46.348230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.042
[2024-11-20 09:40:46.348250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.042 [2024-11-20 09:40:46.363121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.042 [2024-11-20 09:40:46.363141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.301 [2024-11-20 09:40:46.372247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.301 [2024-11-20 09:40:46.372268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.301 [2024-11-20 09:40:46.381028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.301 [2024-11-20 09:40:46.381049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.301 16471.80 IOPS, 128.69 MiB/s 00:08:23.301 Latency(us) 00:08:23.301 [2024-11-20T08:40:46.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.301 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:23.301 Nvme1n1 : 5.01 16475.19 128.71 0.00 0.00 7762.35 3305.29 18008.15 00:08:23.301 [2024-11-20T08:40:46.633Z] =================================================================================================================== 00:08:23.301 [2024-11-20T08:40:46.633Z] Total : 16475.19 128.71 0.00 0.00 7762.35 3305.29 18008.15 00:08:23.302 [2024-11-20 09:40:46.391479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.391497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.403509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.403525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.423581] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.423610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.431588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.431603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.443622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.443641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.455654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.455673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.467684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.467700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.479715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.479730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.491749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.491765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.503775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.503785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.515810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.515823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.527841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.527856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 [2024-11-20 09:40:46.539869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:23.302 [2024-11-20 09:40:46.539880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:23.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2788675) - No such process 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2788675 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.302 delay0 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.302 09:40:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:23.560 [2024-11-20 09:40:46.736160] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:30.125 Initializing NVMe Controllers 00:08:30.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:30.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:30.125 Initialization complete. Launching workers. 
00:08:30.125 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 836 00:08:30.125 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1118, failed to submit 38 00:08:30.125 success 928, unsuccessful 190, failed 0 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.125 rmmod nvme_tcp 00:08:30.125 rmmod nvme_fabrics 00:08:30.125 rmmod nvme_keyring 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2786815 ']' 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2786815 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2786815 ']' 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2786815 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.125 09:40:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2786815 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2786815' 00:08:30.125 killing process with pid 2786815 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2786815 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2786815 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.125 09:40:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.031 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:32.031 00:08:32.031 real 0m31.471s 00:08:32.031 user 0m42.069s 00:08:32.031 sys 0m11.205s 00:08:32.031 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.031 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.031 ************************************ 00:08:32.031 END TEST nvmf_zcopy 00:08:32.031 ************************************ 00:08:32.031 09:40:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:32.031 09:40:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:32.031 09:40:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.031 09:40:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.031 ************************************ 00:08:32.031 START TEST nvmf_nmic 00:08:32.031 ************************************ 00:08:32.031 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:32.291 * Looking for test storage... 
00:08:32.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # lcov --version 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.291 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.292 09:40:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:08:32.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.292 --rc genhtml_branch_coverage=1 00:08:32.292 --rc genhtml_function_coverage=1 00:08:32.292 --rc genhtml_legend=1 00:08:32.292 --rc geninfo_all_blocks=1 00:08:32.292 --rc geninfo_unexecuted_blocks=1 
00:08:32.292 00:08:32.292 ' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:08:32.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.292 --rc genhtml_branch_coverage=1 00:08:32.292 --rc genhtml_function_coverage=1 00:08:32.292 --rc genhtml_legend=1 00:08:32.292 --rc geninfo_all_blocks=1 00:08:32.292 --rc geninfo_unexecuted_blocks=1 00:08:32.292 00:08:32.292 ' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:08:32.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.292 --rc genhtml_branch_coverage=1 00:08:32.292 --rc genhtml_function_coverage=1 00:08:32.292 --rc genhtml_legend=1 00:08:32.292 --rc geninfo_all_blocks=1 00:08:32.292 --rc geninfo_unexecuted_blocks=1 00:08:32.292 00:08:32.292 ' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:08:32.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.292 --rc genhtml_branch_coverage=1 00:08:32.292 --rc genhtml_function_coverage=1 00:08:32.292 --rc genhtml_legend=1 00:08:32.292 --rc geninfo_all_blocks=1 00:08:32.292 --rc geninfo_unexecuted_blocks=1 00:08:32.292 00:08:32.292 ' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.292 09:40:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:32.292 
09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:32.292 09:40:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.867 09:41:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:38.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:38.867 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.867 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:38.868 Found net devices under 0000:86:00.0: cvl_0_0 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:38.868 Found net devices under 0000:86:00.1: cvl_0_1 00:08:38.868 
09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:38.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:08:38.868 00:08:38.868 --- 10.0.0.2 ping statistics --- 00:08:38.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.868 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:08:38.868 00:08:38.868 --- 10.0.0.1 ping statistics --- 00:08:38.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.868 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2794241 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2794241 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2794241 ']' 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 [2024-11-20 09:41:01.624615] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:08:38.868 [2024-11-20 09:41:01.624670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.868 [2024-11-20 09:41:01.705287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.868 [2024-11-20 09:41:01.749952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.868 [2024-11-20 09:41:01.749991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:38.868 [2024-11-20 09:41:01.749998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.868 [2024-11-20 09:41:01.750005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.868 [2024-11-20 09:41:01.750010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.868 [2024-11-20 09:41:01.751539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.868 [2024-11-20 09:41:01.751650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.868 [2024-11-20 09:41:01.751735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.868 [2024-11-20 09:41:01.751734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 [2024-11-20 09:41:01.889338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.868 
09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 Malloc0 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.868 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 [2024-11-20 09:41:01.951594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:38.869 test case1: single bdev can't be used in multiple subsystems 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 [2024-11-20 09:41:01.979507] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:38.869 [2024-11-20 
09:41:01.979529] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:38.869 [2024-11-20 09:41:01.979536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.869 request: 00:08:38.869 { 00:08:38.869 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:38.869 "namespace": { 00:08:38.869 "bdev_name": "Malloc0", 00:08:38.869 "no_auto_visible": false 00:08:38.869 }, 00:08:38.869 "method": "nvmf_subsystem_add_ns", 00:08:38.869 "req_id": 1 00:08:38.869 } 00:08:38.869 Got JSON-RPC error response 00:08:38.869 response: 00:08:38.869 { 00:08:38.869 "code": -32602, 00:08:38.869 "message": "Invalid parameters" 00:08:38.869 } 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:38.869 Adding namespace failed - expected result. 
00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:38.869 test case2: host connect to nvmf target in multiple paths 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:38.869 [2024-11-20 09:41:01.991659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.869 09:41:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:40.311 09:41:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:41.248 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:41.248 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:41.248 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:41.248 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:41.248 09:41:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:43.153 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:43.153 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:43.153 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:43.153 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:43.153 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:43.153 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:43.153 09:41:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:43.153 [global] 00:08:43.153 thread=1 00:08:43.153 invalidate=1 00:08:43.153 rw=write 00:08:43.153 time_based=1 00:08:43.153 runtime=1 00:08:43.153 ioengine=libaio 00:08:43.153 direct=1 00:08:43.153 bs=4096 00:08:43.153 iodepth=1 00:08:43.153 norandommap=0 00:08:43.153 numjobs=1 00:08:43.153 00:08:43.153 verify_dump=1 00:08:43.153 verify_backlog=512 00:08:43.153 verify_state_save=0 00:08:43.153 do_verify=1 00:08:43.153 verify=crc32c-intel 00:08:43.153 [job0] 00:08:43.153 filename=/dev/nvme0n1 00:08:43.153 Could not set queue depth (nvme0n1) 00:08:43.412 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.412 fio-3.35 00:08:43.412 Starting 1 thread 00:08:44.791 00:08:44.791 job0: (groupid=0, jobs=1): err= 0: pid=2795137: Wed Nov 20 09:41:07 2024 00:08:44.791 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:08:44.791 slat (nsec): min=9556, max=24386, avg=21714.41, stdev=4649.78 00:08:44.791 clat (usec): min=40740, max=41937, avg=41107.86, stdev=334.78 00:08:44.791 lat (usec): min=40765, max=41959, 
avg=41129.58, stdev=333.59 00:08:44.791 clat percentiles (usec): 00:08:44.791 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:44.791 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:44.791 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:08:44.791 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:08:44.791 | 99.99th=[41681] 00:08:44.791 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:08:44.791 slat (usec): min=9, max=27577, avg=65.08, stdev=1218.29 00:08:44.791 clat (usec): min=115, max=348, avg=163.10, stdev=28.40 00:08:44.791 lat (usec): min=125, max=27809, avg=228.18, stdev=1221.64 00:08:44.791 clat percentiles (usec): 00:08:44.791 | 1.00th=[ 122], 5.00th=[ 126], 10.00th=[ 129], 20.00th=[ 133], 00:08:44.791 | 30.00th=[ 137], 40.00th=[ 163], 50.00th=[ 172], 60.00th=[ 178], 00:08:44.791 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 194], 00:08:44.791 | 99.00th=[ 231], 99.50th=[ 285], 99.90th=[ 351], 99.95th=[ 351], 00:08:44.791 | 99.99th=[ 351] 00:08:44.791 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:44.791 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:44.791 lat (usec) : 250=95.13%, 500=0.75% 00:08:44.791 lat (msec) : 50=4.12% 00:08:44.791 cpu : usr=0.39%, sys=0.59%, ctx=537, majf=0, minf=1 00:08:44.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:44.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.791 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:44.791 00:08:44.791 Run status group 0 (all jobs): 00:08:44.791 READ: bw=86.0KiB/s (88.1kB/s), 86.0KiB/s-86.0KiB/s (88.1kB/s-88.1kB/s), io=88.0KiB (90.1kB), 
run=1023-1023msec 00:08:44.791 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:08:44.791 00:08:44.791 Disk stats (read/write): 00:08:44.791 nvme0n1: ios=70/512, merge=0/0, ticks=1003/85, in_queue=1088, util=98.60% 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:08:44.791 09:41:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:44.791 rmmod nvme_tcp 00:08:44.791 rmmod nvme_fabrics 00:08:44.791 rmmod nvme_keyring 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2794241 ']' 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2794241 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2794241 ']' 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2794241 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2794241 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2794241' 00:08:44.791 killing process with pid 2794241 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2794241 00:08:44.791 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2794241 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.050 09:41:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.587 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.587 00:08:47.587 real 0m14.987s 00:08:47.587 user 0m32.981s 00:08:47.587 sys 0m5.283s 00:08:47.587 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.587 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:47.587 ************************************ 00:08:47.587 END TEST nvmf_nmic 00:08:47.587 ************************************ 00:08:47.587 09:41:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:08:47.587 09:41:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.587 09:41:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.587 09:41:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.587 ************************************ 00:08:47.587 START TEST nvmf_fio_target 00:08:47.588 ************************************ 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:47.588 * Looking for test storage... 00:08:47.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # lcov --version 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.588 09:41:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:08:47.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.588 --rc genhtml_branch_coverage=1 00:08:47.588 --rc genhtml_function_coverage=1 00:08:47.588 --rc genhtml_legend=1 00:08:47.588 --rc geninfo_all_blocks=1 00:08:47.588 --rc geninfo_unexecuted_blocks=1 00:08:47.588 00:08:47.588 ' 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:08:47.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.588 --rc genhtml_branch_coverage=1 00:08:47.588 --rc genhtml_function_coverage=1 00:08:47.588 --rc genhtml_legend=1 00:08:47.588 --rc geninfo_all_blocks=1 00:08:47.588 --rc geninfo_unexecuted_blocks=1 00:08:47.588 00:08:47.588 ' 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:08:47.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.588 --rc genhtml_branch_coverage=1 00:08:47.588 --rc genhtml_function_coverage=1 00:08:47.588 --rc genhtml_legend=1 00:08:47.588 --rc geninfo_all_blocks=1 00:08:47.588 --rc geninfo_unexecuted_blocks=1 00:08:47.588 00:08:47.588 ' 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:08:47.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.588 --rc 
genhtml_branch_coverage=1 00:08:47.588 --rc genhtml_function_coverage=1 00:08:47.588 --rc genhtml_legend=1 00:08:47.588 --rc geninfo_all_blocks=1 00:08:47.588 --rc geninfo_unexecuted_blocks=1 00:08:47.588 00:08:47.588 ' 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.588 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.589 09:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.163 09:41:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:54.163 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:54.163 09:41:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:54.163 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.163 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:54.164 Found net devices under 0000:86:00.0: cvl_0_0 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:54.164 Found net devices under 0000:86:00.1: cvl_0_1 
00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:54.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:08:54.164 00:08:54.164 --- 10.0.0.2 ping statistics --- 00:08:54.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.164 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:54.164 00:08:54.164 --- 10.0.0.1 ping statistics --- 00:08:54.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.164 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2798910 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2798910 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2798910 ']' 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.164 [2024-11-20 09:41:16.641788] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:08:54.164 [2024-11-20 09:41:16.641840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.164 [2024-11-20 09:41:16.721003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.164 [2024-11-20 09:41:16.764638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.164 [2024-11-20 09:41:16.764676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.164 [2024-11-20 09:41:16.764683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.164 [2024-11-20 09:41:16.764689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.164 [2024-11-20 09:41:16.764694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:54.164 [2024-11-20 09:41:16.766297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.164 [2024-11-20 09:41:16.766330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.164 [2024-11-20 09:41:16.766437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.164 [2024-11-20 09:41:16.766438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.164 09:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:54.164 [2024-11-20 09:41:17.080631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.164 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.164 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:54.164 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.423 09:41:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:54.423 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.682 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:54.682 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.682 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:54.682 09:41:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:54.941 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.200 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:55.200 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.459 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:55.459 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.718 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:55.718 09:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:55.718 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:55.978 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:55.978 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:56.237 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:56.237 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.497 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.497 [2024-11-20 09:41:19.785671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.497 09:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:56.756 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:57.015 09:41:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
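The `scripts/rpc.py` invocations above each wrap one JSON-RPC request to the SPDK target. The sketch below lists the method sequence exactly as it appears in the log (transport, malloc bdevs, RAID-0 and concat bdevs, subsystem, namespaces, listener) inside a minimal JSON-RPC 2.0 envelope; per-request parameter names are deliberately elided rather than guessed, and the actual delivery over SPDK's RPC socket is omitted.

```python
import json

# One entry per rpc.py invocation in the log, in order.
SETUP_METHODS = [
    "nvmf_create_transport",        # -t tcp -o -u 8192
    "bdev_malloc_create",           # Malloc0
    "bdev_malloc_create",           # Malloc1
    "bdev_malloc_create",           # Malloc2
    "bdev_malloc_create",           # Malloc3
    "bdev_raid_create",             # raid0 over 'Malloc2 Malloc3'
    "bdev_malloc_create",           # Malloc4
    "bdev_malloc_create",           # Malloc5
    "bdev_malloc_create",           # Malloc6
    "bdev_raid_create",             # concat0 over 'Malloc4 Malloc5 Malloc6'
    "nvmf_create_subsystem",        # nqn.2016-06.io.spdk:cnode1
    "nvmf_subsystem_add_ns",        # Malloc0
    "nvmf_subsystem_add_ns",        # Malloc1
    "nvmf_subsystem_add_listener",  # -t tcp -a 10.0.0.2 -s 4420
    "nvmf_subsystem_add_ns",        # raid0
    "nvmf_subsystem_add_ns",        # concat0
]

def rpc_envelope(req_id: int, method: str) -> dict:
    # Minimal JSON-RPC 2.0 envelope; params for each method are elided.
    return {"jsonrpc": "2.0", "id": req_id, "method": method}

requests = [json.dumps(rpc_envelope(i, m))
            for i, m in enumerate(SETUP_METHODS, start=1)]
print(len(requests))  # 16 requests in the provisioning sequence
```

The end result is a single subsystem (`cnode1`) exposing four namespaces (two plain malloc bdevs, one RAID-0, one concat), which is why the `nvme connect` that follows produces the four block devices `nvme0n1`-`nvme0n4` used by the fio jobs.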
00:08:58.394 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:58.394 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:58.394 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.394 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:58.394 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:58.394 09:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:00.320 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:00.320 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:00.320 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.320 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:00.320 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.320 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:00.320 09:41:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:00.320 [global] 00:09:00.320 thread=1 00:09:00.320 invalidate=1 00:09:00.320 rw=write 00:09:00.320 time_based=1 00:09:00.320 runtime=1 00:09:00.320 ioengine=libaio 00:09:00.320 direct=1 00:09:00.320 bs=4096 00:09:00.320 iodepth=1 00:09:00.320 norandommap=0 00:09:00.320 numjobs=1 00:09:00.320 00:09:00.320 
verify_dump=1 00:09:00.320 verify_backlog=512 00:09:00.320 verify_state_save=0 00:09:00.320 do_verify=1 00:09:00.320 verify=crc32c-intel 00:09:00.320 [job0] 00:09:00.320 filename=/dev/nvme0n1 00:09:00.320 [job1] 00:09:00.320 filename=/dev/nvme0n2 00:09:00.320 [job2] 00:09:00.320 filename=/dev/nvme0n3 00:09:00.320 [job3] 00:09:00.320 filename=/dev/nvme0n4 00:09:00.320 Could not set queue depth (nvme0n1) 00:09:00.320 Could not set queue depth (nvme0n2) 00:09:00.320 Could not set queue depth (nvme0n3) 00:09:00.320 Could not set queue depth (nvme0n4) 00:09:00.584 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.584 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.584 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.584 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.584 fio-3.35 00:09:00.584 Starting 4 threads 00:09:01.956 00:09:01.956 job0: (groupid=0, jobs=1): err= 0: pid=2800257: Wed Nov 20 09:41:24 2024 00:09:01.956 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:01.956 slat (nsec): min=7155, max=39418, avg=8287.35, stdev=1584.21 00:09:01.956 clat (usec): min=153, max=436, avg=192.92, stdev=15.16 00:09:01.956 lat (usec): min=163, max=444, avg=201.21, stdev=15.23 00:09:01.956 clat percentiles (usec): 00:09:01.956 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:09:01.956 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:09:01.956 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 215], 00:09:01.956 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 293], 00:09:01.956 | 99.99th=[ 437] 00:09:01.956 write: IOPS=2913, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:09:01.956 slat (nsec): min=10560, max=46495, avg=12102.33, 
stdev=2231.04 00:09:01.956 clat (usec): min=113, max=325, avg=148.46, stdev=23.71 00:09:01.956 lat (usec): min=124, max=347, avg=160.57, stdev=24.41 00:09:01.956 clat percentiles (usec): 00:09:01.956 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 131], 00:09:01.956 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 147], 00:09:01.956 | 70.00th=[ 153], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 192], 00:09:01.956 | 99.00th=[ 237], 99.50th=[ 251], 99.90th=[ 289], 99.95th=[ 310], 00:09:01.956 | 99.99th=[ 326] 00:09:01.956 bw ( KiB/s): min=12288, max=12288, per=71.83%, avg=12288.00, stdev= 0.00, samples=1 00:09:01.956 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:01.956 lat (usec) : 250=99.29%, 500=0.71% 00:09:01.956 cpu : usr=4.00%, sys=9.30%, ctx=5477, majf=0, minf=1 00:09:01.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.956 issued rwts: total=2560,2916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.956 job1: (groupid=0, jobs=1): err= 0: pid=2800258: Wed Nov 20 09:41:24 2024 00:09:01.956 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:09:01.956 slat (nsec): min=21067, max=25775, avg=22492.09, stdev=1213.87 00:09:01.956 clat (usec): min=40790, max=41303, avg=40983.56, stdev=101.72 00:09:01.956 lat (usec): min=40812, max=41325, avg=41006.05, stdev=101.29 00:09:01.956 clat percentiles (usec): 00:09:01.956 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:01.956 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:01.956 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:01.956 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:01.956 | 
99.99th=[41157] 00:09:01.956 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:09:01.956 slat (nsec): min=10196, max=37929, avg=12054.72, stdev=2264.61 00:09:01.956 clat (usec): min=137, max=710, avg=176.04, stdev=39.39 00:09:01.956 lat (usec): min=148, max=721, avg=188.09, stdev=39.50 00:09:01.956 clat percentiles (usec): 00:09:01.956 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:09:01.956 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:09:01.956 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 202], 00:09:01.956 | 99.00th=[ 249], 99.50th=[ 545], 99.90th=[ 709], 99.95th=[ 709], 00:09:01.956 | 99.99th=[ 709] 00:09:01.956 bw ( KiB/s): min= 4096, max= 4096, per=23.94%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.956 lat (usec) : 250=94.77%, 500=0.37%, 750=0.56% 00:09:01.956 lat (msec) : 50=4.30% 00:09:01.956 cpu : usr=0.19%, sys=1.06%, ctx=535, majf=0, minf=1 00:09:01.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.956 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.956 job2: (groupid=0, jobs=1): err= 0: pid=2800259: Wed Nov 20 09:41:24 2024 00:09:01.956 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:09:01.956 slat (nsec): min=13824, max=24247, avg=23085.55, stdev=2125.97 00:09:01.956 clat (usec): min=40699, max=41999, avg=41181.60, stdev=432.00 00:09:01.956 lat (usec): min=40713, max=42023, avg=41204.69, stdev=432.57 00:09:01.956 clat percentiles (usec): 00:09:01.956 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:01.956 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 
60.00th=[41157], 00:09:01.956 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:01.956 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:01.956 | 99.99th=[42206] 00:09:01.956 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:09:01.956 slat (nsec): min=10000, max=37608, avg=12611.66, stdev=2881.03 00:09:01.956 clat (usec): min=137, max=354, avg=193.45, stdev=32.78 00:09:01.956 lat (usec): min=149, max=391, avg=206.06, stdev=32.23 00:09:01.956 clat percentiles (usec): 00:09:01.956 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:09:01.956 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 190], 60.00th=[ 200], 00:09:01.956 | 70.00th=[ 210], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 251], 00:09:01.956 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 355], 99.95th=[ 355], 00:09:01.956 | 99.99th=[ 355] 00:09:01.956 bw ( KiB/s): min= 4096, max= 4096, per=23.94%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.956 lat (usec) : 250=90.82%, 500=5.06% 00:09:01.956 lat (msec) : 50=4.12% 00:09:01.956 cpu : usr=0.20%, sys=0.69%, ctx=537, majf=0, minf=1 00:09:01.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.957 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.957 job3: (groupid=0, jobs=1): err= 0: pid=2800260: Wed Nov 20 09:41:24 2024 00:09:01.957 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:09:01.957 slat (nsec): min=9559, max=22886, avg=13227.95, stdev=3864.86 00:09:01.957 clat (usec): min=40833, max=41998, avg=41110.06, stdev=358.57 00:09:01.957 lat (usec): min=40842, max=42009, avg=41123.29, stdev=357.80 
00:09:01.957 clat percentiles (usec): 00:09:01.957 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:01.957 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:01.957 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:01.957 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:01.957 | 99.99th=[42206] 00:09:01.957 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:09:01.957 slat (nsec): min=9420, max=40744, avg=12141.32, stdev=3408.11 00:09:01.957 clat (usec): min=127, max=346, avg=210.67, stdev=38.06 00:09:01.957 lat (usec): min=138, max=387, avg=222.81, stdev=39.24 00:09:01.957 clat percentiles (usec): 00:09:01.957 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 172], 00:09:01.957 | 30.00th=[ 184], 40.00th=[ 196], 50.00th=[ 223], 60.00th=[ 237], 00:09:01.957 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 262], 00:09:01.957 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 347], 99.95th=[ 347], 00:09:01.957 | 99.99th=[ 347] 00:09:01.957 bw ( KiB/s): min= 4096, max= 4096, per=23.94%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.957 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.957 lat (usec) : 250=85.58%, 500=10.30% 00:09:01.957 lat (msec) : 50=4.12% 00:09:01.957 cpu : usr=0.00%, sys=0.88%, ctx=534, majf=0, minf=2 00:09:01.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.957 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.957 00:09:01.957 Run status group 0 (all jobs): 00:09:01.957 READ: bw=9.86MiB/s (10.3MB/s), 86.3KiB/s-9.99MiB/s (88.3kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1041msec 00:09:01.957 
WRITE: bw=16.7MiB/s (17.5MB/s), 1967KiB/s-11.4MiB/s (2015kB/s-11.9MB/s), io=17.4MiB (18.2MB), run=1001-1041msec 00:09:01.957 00:09:01.957 Disk stats (read/write): 00:09:01.957 nvme0n1: ios=2218/2560, merge=0/0, ticks=465/346, in_queue=811, util=85.67% 00:09:01.957 nvme0n2: ios=68/512, merge=0/0, ticks=805/87, in_queue=892, util=90.75% 00:09:01.957 nvme0n3: ios=43/512, merge=0/0, ticks=1645/97, in_queue=1742, util=93.44% 00:09:01.957 nvme0n4: ios=74/512, merge=0/0, ticks=776/109, in_queue=885, util=95.49% 00:09:01.957 09:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:01.957 [global] 00:09:01.957 thread=1 00:09:01.957 invalidate=1 00:09:01.957 rw=randwrite 00:09:01.957 time_based=1 00:09:01.957 runtime=1 00:09:01.957 ioengine=libaio 00:09:01.957 direct=1 00:09:01.957 bs=4096 00:09:01.957 iodepth=1 00:09:01.957 norandommap=0 00:09:01.957 numjobs=1 00:09:01.957 00:09:01.957 verify_dump=1 00:09:01.957 verify_backlog=512 00:09:01.957 verify_state_save=0 00:09:01.957 do_verify=1 00:09:01.957 verify=crc32c-intel 00:09:01.957 [job0] 00:09:01.957 filename=/dev/nvme0n1 00:09:01.957 [job1] 00:09:01.957 filename=/dev/nvme0n2 00:09:01.957 [job2] 00:09:01.957 filename=/dev/nvme0n3 00:09:01.957 [job3] 00:09:01.957 filename=/dev/nvme0n4 00:09:01.957 Could not set queue depth (nvme0n1) 00:09:01.957 Could not set queue depth (nvme0n2) 00:09:01.957 Could not set queue depth (nvme0n3) 00:09:01.957 Could not set queue depth (nvme0n4) 00:09:01.957 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.957 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.957 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.957 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.957 fio-3.35 00:09:01.957 Starting 4 threads 00:09:03.329 00:09:03.329 job0: (groupid=0, jobs=1): err= 0: pid=2800640: Wed Nov 20 09:41:26 2024 00:09:03.329 read: IOPS=2183, BW=8735KiB/s (8945kB/s)(8744KiB/1001msec) 00:09:03.329 slat (nsec): min=4206, max=30529, avg=7574.23, stdev=1123.29 00:09:03.329 clat (usec): min=173, max=567, avg=250.87, stdev=36.86 00:09:03.329 lat (usec): min=177, max=574, avg=258.45, stdev=36.73 00:09:03.329 clat percentiles (usec): 00:09:03.329 | 1.00th=[ 182], 5.00th=[ 198], 10.00th=[ 212], 20.00th=[ 227], 00:09:03.329 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:09:03.329 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 310], 00:09:03.329 | 99.00th=[ 392], 99.50th=[ 449], 99.90th=[ 486], 99.95th=[ 494], 00:09:03.329 | 99.99th=[ 570] 00:09:03.329 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:03.329 slat (nsec): min=9308, max=47241, avg=10637.60, stdev=1556.71 00:09:03.329 clat (usec): min=105, max=275, avg=155.02, stdev=21.60 00:09:03.329 lat (usec): min=115, max=320, avg=165.66, stdev=21.81 00:09:03.329 clat percentiles (usec): 00:09:03.329 | 1.00th=[ 113], 5.00th=[ 123], 10.00th=[ 129], 20.00th=[ 137], 00:09:03.329 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 161], 00:09:03.329 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 192], 00:09:03.329 | 99.00th=[ 212], 99.50th=[ 227], 99.90th=[ 262], 99.95th=[ 273], 00:09:03.329 | 99.99th=[ 277] 00:09:03.329 bw ( KiB/s): min=10602, max=10602, per=38.20%, avg=10602.00, stdev= 0.00, samples=1 00:09:03.329 iops : min= 2650, max= 2650, avg=2650.00, stdev= 0.00, samples=1 00:09:03.329 lat (usec) : 250=78.91%, 500=21.07%, 750=0.02% 00:09:03.329 cpu : usr=3.10%, sys=3.80%, ctx=4749, majf=0, minf=1 00:09:03.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.329 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.329 issued rwts: total=2186,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.329 job1: (groupid=0, jobs=1): err= 0: pid=2800648: Wed Nov 20 09:41:26 2024 00:09:03.329 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:03.329 slat (nsec): min=7058, max=39547, avg=8832.17, stdev=1669.90 00:09:03.329 clat (usec): min=182, max=522, avg=263.67, stdev=49.79 00:09:03.329 lat (usec): min=190, max=530, avg=272.50, stdev=49.88 00:09:03.329 clat percentiles (usec): 00:09:03.329 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 235], 00:09:03.329 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:09:03.329 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 343], 00:09:03.329 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 515], 99.95th=[ 515], 00:09:03.329 | 99.99th=[ 523] 00:09:03.329 write: IOPS=2487, BW=9950KiB/s (10.2MB/s)(9960KiB/1001msec); 0 zone resets 00:09:03.329 slat (nsec): min=10314, max=58856, avg=12075.51, stdev=2133.82 00:09:03.329 clat (usec): min=114, max=420, avg=159.23, stdev=22.30 00:09:03.329 lat (usec): min=128, max=432, avg=171.31, stdev=22.54 00:09:03.329 clat percentiles (usec): 00:09:03.329 | 1.00th=[ 123], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 141], 00:09:03.329 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 163], 00:09:03.329 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 198], 00:09:03.329 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 243], 99.95th=[ 249], 00:09:03.329 | 99.99th=[ 420] 00:09:03.329 bw ( KiB/s): min= 9136, max= 9136, per=32.92%, avg=9136.00, stdev= 0.00, samples=1 00:09:03.329 iops : min= 2284, max= 2284, avg=2284.00, stdev= 0.00, samples=1 00:09:03.329 lat (usec) : 250=76.51%, 500=23.12%, 750=0.37% 00:09:03.329 cpu : usr=3.10%, sys=8.30%, ctx=4539, 
majf=0, minf=1 00:09:03.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.329 issued rwts: total=2048,2490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.329 job2: (groupid=0, jobs=1): err= 0: pid=2800663: Wed Nov 20 09:41:26 2024 00:09:03.329 read: IOPS=1013, BW=4055KiB/s (4152kB/s)(4148KiB/1023msec) 00:09:03.329 slat (nsec): min=7053, max=29438, avg=8405.58, stdev=2054.32 00:09:03.329 clat (usec): min=181, max=41065, avg=700.12, stdev=4359.61 00:09:03.329 lat (usec): min=189, max=41088, avg=708.52, stdev=4361.10 00:09:03.329 clat percentiles (usec): 00:09:03.329 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:09:03.329 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:09:03.329 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 258], 00:09:03.329 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:03.329 | 99.99th=[41157] 00:09:03.329 write: IOPS=1501, BW=6006KiB/s (6150kB/s)(6144KiB/1023msec); 0 zone resets 00:09:03.329 slat (usec): min=9, max=3409, avg=12.81, stdev=86.74 00:09:03.329 clat (usec): min=138, max=442, avg=170.21, stdev=19.67 00:09:03.329 lat (usec): min=148, max=3606, avg=183.02, stdev=89.72 00:09:03.329 clat percentiles (usec): 00:09:03.329 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:09:03.329 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:09:03.329 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:09:03.329 | 99.00th=[ 217], 99.50th=[ 255], 99.90th=[ 437], 99.95th=[ 445], 00:09:03.329 | 99.99th=[ 445] 00:09:03.329 bw ( KiB/s): min= 1336, max=10952, per=22.14%, avg=6144.00, stdev=6799.54, samples=2 00:09:03.329 iops : min= 334, max= 2738, 
avg=1536.00, stdev=1699.88, samples=2 00:09:03.329 lat (usec) : 250=95.10%, 500=4.43% 00:09:03.329 lat (msec) : 50=0.47% 00:09:03.329 cpu : usr=0.88%, sys=2.84%, ctx=2575, majf=0, minf=1 00:09:03.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.329 issued rwts: total=1037,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.329 job3: (groupid=0, jobs=1): err= 0: pid=2800669: Wed Nov 20 09:41:26 2024 00:09:03.329 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:09:03.329 slat (nsec): min=10875, max=27640, avg=18666.59, stdev=5878.44 00:09:03.329 clat (usec): min=40862, max=41256, avg=40995.72, stdev=81.63 00:09:03.329 lat (usec): min=40889, max=41267, avg=41014.39, stdev=78.71 00:09:03.329 clat percentiles (usec): 00:09:03.329 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:03.329 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:03.329 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:03.329 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:03.329 | 99.99th=[41157] 00:09:03.329 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:03.329 slat (nsec): min=11948, max=58910, avg=13657.91, stdev=2772.37 00:09:03.329 clat (usec): min=146, max=309, avg=188.60, stdev=22.28 00:09:03.329 lat (usec): min=160, max=368, avg=202.26, stdev=23.01 00:09:03.329 clat percentiles (usec): 00:09:03.329 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:09:03.329 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:09:03.329 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 233], 00:09:03.329 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 
310], 99.95th=[ 310], 00:09:03.330 | 99.99th=[ 310] 00:09:03.330 bw ( KiB/s): min= 4087, max= 4087, per=14.73%, avg=4087.00, stdev= 0.00, samples=1 00:09:03.330 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:09:03.330 lat (usec) : 250=94.19%, 500=1.69% 00:09:03.330 lat (msec) : 50=4.12% 00:09:03.330 cpu : usr=0.50%, sys=0.99%, ctx=536, majf=0, minf=1 00:09:03.330 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.330 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.330 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.330 00:09:03.330 Run status group 0 (all jobs): 00:09:03.330 READ: bw=20.2MiB/s (21.2MB/s), 87.2KiB/s-8735KiB/s (89.3kB/s-8945kB/s), io=20.7MiB (21.7MB), run=1001-1023msec 00:09:03.330 WRITE: bw=27.1MiB/s (28.4MB/s), 2030KiB/s-9.99MiB/s (2078kB/s-10.5MB/s), io=27.7MiB (29.1MB), run=1001-1023msec 00:09:03.330 00:09:03.330 Disk stats (read/write): 00:09:03.330 nvme0n1: ios=1954/2048, merge=0/0, ticks=1374/323, in_queue=1697, util=89.78% 00:09:03.330 nvme0n2: ios=1794/2048, merge=0/0, ticks=1397/306, in_queue=1703, util=93.71% 00:09:03.330 nvme0n3: ios=1062/1536, merge=0/0, ticks=1463/248, in_queue=1711, util=97.71% 00:09:03.330 nvme0n4: ios=62/512, merge=0/0, ticks=1669/93, in_queue=1762, util=99.48% 00:09:03.330 09:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:03.330 [global] 00:09:03.330 thread=1 00:09:03.330 invalidate=1 00:09:03.330 rw=write 00:09:03.330 time_based=1 00:09:03.330 runtime=1 00:09:03.330 ioengine=libaio 00:09:03.330 direct=1 00:09:03.330 bs=4096 00:09:03.330 iodepth=128 00:09:03.330 norandommap=0 00:09:03.330 numjobs=1 00:09:03.330 
00:09:03.330 verify_dump=1 00:09:03.330 verify_backlog=512 00:09:03.330 verify_state_save=0 00:09:03.330 do_verify=1 00:09:03.330 verify=crc32c-intel 00:09:03.330 [job0] 00:09:03.330 filename=/dev/nvme0n1 00:09:03.330 [job1] 00:09:03.330 filename=/dev/nvme0n2 00:09:03.330 [job2] 00:09:03.330 filename=/dev/nvme0n3 00:09:03.330 [job3] 00:09:03.330 filename=/dev/nvme0n4 00:09:03.330 Could not set queue depth (nvme0n1) 00:09:03.330 Could not set queue depth (nvme0n2) 00:09:03.330 Could not set queue depth (nvme0n3) 00:09:03.330 Could not set queue depth (nvme0n4) 00:09:03.586 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.586 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.586 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.586 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.586 fio-3.35 00:09:03.586 Starting 4 threads 00:09:04.955 00:09:04.955 job0: (groupid=0, jobs=1): err= 0: pid=2801117: Wed Nov 20 09:41:28 2024 00:09:04.955 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:09:04.955 slat (nsec): min=1039, max=47912k, avg=181265.01, stdev=1562784.98 00:09:04.955 clat (usec): min=5752, max=71192, avg=22692.33, stdev=15149.36 00:09:04.955 lat (usec): min=5757, max=71197, avg=22873.60, stdev=15222.11 00:09:04.955 clat percentiles (usec): 00:09:04.955 | 1.00th=[ 6652], 5.00th=[10290], 10.00th=[11338], 20.00th=[12125], 00:09:04.955 | 30.00th=[13566], 40.00th=[14222], 50.00th=[15139], 60.00th=[18744], 00:09:04.955 | 70.00th=[25035], 80.00th=[31327], 90.00th=[43254], 95.00th=[64226], 00:09:04.955 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:09:04.955 | 99.99th=[70779] 00:09:04.955 write: IOPS=3103, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1008msec); 0 zone resets 00:09:04.955 
slat (nsec): min=1811, max=11146k, avg=138012.55, stdev=656489.40 00:09:04.955 clat (usec): min=3573, max=64280, avg=18395.20, stdev=10098.06 00:09:04.955 lat (usec): min=6629, max=64286, avg=18533.21, stdev=10153.47 00:09:04.955 clat percentiles (usec): 00:09:04.955 | 1.00th=[ 7963], 5.00th=[10421], 10.00th=[10683], 20.00th=[11600], 00:09:04.955 | 30.00th=[12256], 40.00th=[12649], 50.00th=[15533], 60.00th=[17957], 00:09:04.955 | 70.00th=[21365], 80.00th=[22152], 90.00th=[28967], 95.00th=[39584], 00:09:04.955 | 99.00th=[60031], 99.50th=[61604], 99.90th=[64226], 99.95th=[64226], 00:09:04.955 | 99.99th=[64226] 00:09:04.955 bw ( KiB/s): min=12288, max=12288, per=17.33%, avg=12288.00, stdev= 0.00, samples=2 00:09:04.955 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:04.955 lat (msec) : 4=0.02%, 10=3.87%, 20=59.31%, 50=30.92%, 100=5.89% 00:09:04.955 cpu : usr=1.39%, sys=3.18%, ctx=360, majf=0, minf=2 00:09:04.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:04.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.955 issued rwts: total=3072,3128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.955 job1: (groupid=0, jobs=1): err= 0: pid=2801126: Wed Nov 20 09:41:28 2024 00:09:04.955 read: IOPS=5642, BW=22.0MiB/s (23.1MB/s)(23.0MiB/1043msec) 00:09:04.955 slat (nsec): min=1140, max=7688.8k, avg=80037.37, stdev=475155.28 00:09:04.955 clat (usec): min=4037, max=50918, avg=11305.94, stdev=5609.92 00:09:04.955 lat (usec): min=4042, max=53761, avg=11385.98, stdev=5626.24 00:09:04.955 clat percentiles (usec): 00:09:04.955 | 1.00th=[ 6194], 5.00th=[ 7767], 10.00th=[ 8979], 20.00th=[ 9896], 00:09:04.955 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:09:04.955 | 70.00th=[10683], 80.00th=[11338], 90.00th=[12387], 
95.00th=[14222], 00:09:04.955 | 99.00th=[47449], 99.50th=[50594], 99.90th=[50594], 99.95th=[51119], 00:09:04.955 | 99.99th=[51119] 00:09:04.955 write: IOPS=5890, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1043msec); 0 zone resets 00:09:04.955 slat (nsec): min=1927, max=7761.5k, avg=81322.82, stdev=498143.92 00:09:04.955 clat (usec): min=4776, max=24349, avg=10629.98, stdev=1681.89 00:09:04.955 lat (usec): min=4784, max=24358, avg=10711.31, stdev=1731.54 00:09:04.955 clat percentiles (usec): 00:09:04.955 | 1.00th=[ 7373], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[ 9896], 00:09:04.955 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:09:04.955 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11994], 95.00th=[13960], 00:09:04.955 | 99.00th=[16909], 99.50th=[16909], 99.90th=[20841], 99.95th=[20841], 00:09:04.955 | 99.99th=[24249] 00:09:04.955 bw ( KiB/s): min=24576, max=24576, per=34.66%, avg=24576.00, stdev= 0.00, samples=2 00:09:04.955 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:09:04.955 lat (msec) : 10=25.65%, 20=72.91%, 50=1.09%, 100=0.35% 00:09:04.955 cpu : usr=4.13%, sys=5.28%, ctx=431, majf=0, minf=1 00:09:04.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:04.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.955 issued rwts: total=5885,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.955 job2: (groupid=0, jobs=1): err= 0: pid=2801143: Wed Nov 20 09:41:28 2024 00:09:04.955 read: IOPS=4167, BW=16.3MiB/s (17.1MB/s)(17.0MiB/1042msec) 00:09:04.955 slat (nsec): min=1435, max=4705.5k, avg=92656.79, stdev=495361.19 00:09:04.955 clat (usec): min=1932, max=52503, avg=13470.32, stdev=7067.89 00:09:04.955 lat (usec): min=1940, max=55596, avg=13562.97, stdev=7094.78 00:09:04.955 clat percentiles (usec): 00:09:04.955 | 
1.00th=[ 3392], 5.00th=[ 5407], 10.00th=[ 9634], 20.00th=[10683], 00:09:04.955 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:09:04.955 | 70.00th=[13304], 80.00th=[15664], 90.00th=[19006], 95.00th=[20055], 00:09:04.955 | 99.00th=[48497], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:09:04.955 | 99.99th=[52691] 00:09:04.955 write: IOPS=4913, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1042msec); 0 zone resets 00:09:04.955 slat (usec): min=2, max=20624, avg=99.38, stdev=600.94 00:09:04.955 clat (usec): min=497, max=141849, avg=13935.79, stdev=14068.85 00:09:04.955 lat (usec): min=509, max=141858, avg=14035.17, stdev=14101.89 00:09:04.955 clat percentiles (usec): 00:09:04.955 | 1.00th=[ 1090], 5.00th=[ 4883], 10.00th=[ 8225], 20.00th=[ 10159], 00:09:04.955 | 30.00th=[ 10945], 40.00th=[ 11207], 50.00th=[ 11469], 60.00th=[ 11731], 00:09:04.955 | 70.00th=[ 11863], 80.00th=[ 13304], 90.00th=[ 16319], 95.00th=[ 27657], 00:09:04.955 | 99.00th=[ 98042], 99.50th=[123208], 99.90th=[141558], 99.95th=[141558], 00:09:04.955 | 99.99th=[141558] 00:09:04.955 bw ( KiB/s): min=18448, max=22512, per=28.88%, avg=20480.00, stdev=2873.68, samples=2 00:09:04.955 iops : min= 4612, max= 5628, avg=5120.00, stdev=718.42, samples=2 00:09:04.955 lat (usec) : 500=0.02%, 750=0.06%, 1000=0.12% 00:09:04.955 lat (msec) : 2=1.56%, 4=1.50%, 10=12.50%, 20=77.36%, 50=5.19% 00:09:04.955 lat (msec) : 100=1.19%, 250=0.49% 00:09:04.955 cpu : usr=3.46%, sys=6.92%, ctx=575, majf=0, minf=1 00:09:04.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:04.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.955 issued rwts: total=4343,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.955 job3: (groupid=0, jobs=1): err= 0: pid=2801149: Wed Nov 20 09:41:28 2024 00:09:04.955 
read: IOPS=3680, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1008msec) 00:09:04.955 slat (nsec): min=1438, max=15371k, avg=130576.27, stdev=915246.95 00:09:04.955 clat (usec): min=1915, max=45982, avg=15298.49, stdev=5691.23 00:09:04.955 lat (usec): min=4442, max=45993, avg=15429.06, stdev=5748.14 00:09:04.955 clat percentiles (usec): 00:09:04.955 | 1.00th=[ 6390], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11076], 00:09:04.955 | 30.00th=[12387], 40.00th=[12911], 50.00th=[14353], 60.00th=[15533], 00:09:04.955 | 70.00th=[16450], 80.00th=[17171], 90.00th=[21103], 95.00th=[27657], 00:09:04.955 | 99.00th=[36963], 99.50th=[40109], 99.90th=[45876], 99.95th=[45876], 00:09:04.955 | 99.99th=[45876] 00:09:04.955 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:09:04.955 slat (usec): min=2, max=12164, avg=121.36, stdev=626.05 00:09:04.955 clat (usec): min=1491, max=51822, avg=17340.97, stdev=10106.70 00:09:04.955 lat (usec): min=1507, max=51833, avg=17462.33, stdev=10185.40 00:09:04.955 clat percentiles (usec): 00:09:04.955 | 1.00th=[ 4146], 5.00th=[ 7439], 10.00th=[ 9110], 20.00th=[11207], 00:09:04.955 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12649], 60.00th=[14222], 00:09:04.955 | 70.00th=[20841], 80.00th=[22152], 90.00th=[31589], 95.00th=[41157], 00:09:04.955 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:09:04.955 | 99.99th=[51643] 00:09:04.955 bw ( KiB/s): min=12272, max=20480, per=23.10%, avg=16376.00, stdev=5803.93, samples=2 00:09:04.956 iops : min= 3068, max= 5120, avg=4094.00, stdev=1450.98, samples=2 00:09:04.956 lat (msec) : 2=0.05%, 4=0.41%, 10=8.25%, 20=69.11%, 50=21.64% 00:09:04.956 lat (msec) : 100=0.54% 00:09:04.956 cpu : usr=2.88%, sys=5.26%, ctx=459, majf=0, minf=1 00:09:04.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:04.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:09:04.956 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.956 00:09:04.956 Run status group 0 (all jobs): 00:09:04.956 READ: bw=63.7MiB/s (66.8MB/s), 11.9MiB/s-22.0MiB/s (12.5MB/s-23.1MB/s), io=66.4MiB (69.7MB), run=1008-1043msec 00:09:04.956 WRITE: bw=69.2MiB/s (72.6MB/s), 12.1MiB/s-23.0MiB/s (12.7MB/s-24.1MB/s), io=72.2MiB (75.7MB), run=1008-1043msec 00:09:04.956 00:09:04.956 Disk stats (read/write): 00:09:04.956 nvme0n1: ios=2215/2560, merge=0/0, ticks=22552/22440, in_queue=44992, util=82.77% 00:09:04.956 nvme0n2: ios=5162/5135, merge=0/0, ticks=20992/20195, in_queue=41187, util=98.07% 00:09:04.956 nvme0n3: ios=3618/4223, merge=0/0, ticks=16649/30699, in_queue=47348, util=97.71% 00:09:04.956 nvme0n4: ios=3640/3647, merge=0/0, ticks=51971/51909, in_queue=103880, util=98.11% 00:09:04.956 09:41:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:04.956 [global] 00:09:04.956 thread=1 00:09:04.956 invalidate=1 00:09:04.956 rw=randwrite 00:09:04.956 time_based=1 00:09:04.956 runtime=1 00:09:04.956 ioengine=libaio 00:09:04.956 direct=1 00:09:04.956 bs=4096 00:09:04.956 iodepth=128 00:09:04.956 norandommap=0 00:09:04.956 numjobs=1 00:09:04.956 00:09:04.956 verify_dump=1 00:09:04.956 verify_backlog=512 00:09:04.956 verify_state_save=0 00:09:04.956 do_verify=1 00:09:04.956 verify=crc32c-intel 00:09:04.956 [job0] 00:09:04.956 filename=/dev/nvme0n1 00:09:04.956 [job1] 00:09:04.956 filename=/dev/nvme0n2 00:09:04.956 [job2] 00:09:04.956 filename=/dev/nvme0n3 00:09:04.956 [job3] 00:09:04.956 filename=/dev/nvme0n4 00:09:04.956 Could not set queue depth (nvme0n1) 00:09:04.956 Could not set queue depth (nvme0n2) 00:09:04.956 Could not set queue depth (nvme0n3) 00:09:04.956 Could not set queue depth (nvme0n4) 00:09:05.212 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.212 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.212 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.212 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.212 fio-3.35 00:09:05.212 Starting 4 threads 00:09:06.583 00:09:06.583 job0: (groupid=0, jobs=1): err= 0: pid=2801595: Wed Nov 20 09:41:29 2024 00:09:06.583 read: IOPS=5298, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1005msec) 00:09:06.583 slat (nsec): min=1140, max=14890k, avg=91761.39, stdev=626619.75 00:09:06.583 clat (usec): min=589, max=39458, avg=12172.50, stdev=4757.18 00:09:06.583 lat (usec): min=3359, max=39464, avg=12264.26, stdev=4784.55 00:09:06.583 clat percentiles (usec): 00:09:06.583 | 1.00th=[ 4113], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 9765], 00:09:06.583 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[11076], 00:09:06.583 | 70.00th=[12780], 80.00th=[14746], 90.00th=[17695], 95.00th=[22152], 00:09:06.583 | 99.00th=[31327], 99.50th=[32637], 99.90th=[39584], 99.95th=[39584], 00:09:06.583 | 99.99th=[39584] 00:09:06.583 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:09:06.583 slat (nsec): min=1829, max=17439k, avg=86061.13, stdev=545563.65 00:09:06.583 clat (usec): min=2861, max=39435, avg=10992.77, stdev=3781.53 00:09:06.583 lat (usec): min=2877, max=39442, avg=11078.83, stdev=3808.95 00:09:06.583 clat percentiles (usec): 00:09:06.583 | 1.00th=[ 3720], 5.00th=[ 6718], 10.00th=[ 8029], 20.00th=[ 9634], 00:09:06.583 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:09:06.583 | 70.00th=[10683], 80.00th=[12387], 90.00th=[14484], 95.00th=[17171], 00:09:06.583 | 99.00th=[29754], 99.50th=[33424], 99.90th=[33424], 99.95th=[34341], 
00:09:06.583 | 99.99th=[39584] 00:09:06.583 bw ( KiB/s): min=20480, max=24526, per=31.42%, avg=22503.00, stdev=2860.95, samples=2 00:09:06.583 iops : min= 5120, max= 6131, avg=5625.50, stdev=714.88, samples=2 00:09:06.583 lat (usec) : 750=0.01% 00:09:06.583 lat (msec) : 4=1.44%, 10=25.98%, 20=67.41%, 50=5.16% 00:09:06.583 cpu : usr=3.98%, sys=4.48%, ctx=501, majf=0, minf=1 00:09:06.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:06.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.583 issued rwts: total=5325,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.583 job1: (groupid=0, jobs=1): err= 0: pid=2801598: Wed Nov 20 09:41:29 2024 00:09:06.583 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:09:06.583 slat (nsec): min=1086, max=12003k, avg=109013.58, stdev=734203.09 00:09:06.583 clat (usec): min=4905, max=33876, avg=14631.61, stdev=4688.63 00:09:06.583 lat (usec): min=4907, max=37316, avg=14740.63, stdev=4753.78 00:09:06.583 clat percentiles (usec): 00:09:06.583 | 1.00th=[ 5669], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10028], 00:09:06.583 | 30.00th=[11994], 40.00th=[13435], 50.00th=[14091], 60.00th=[15139], 00:09:06.583 | 70.00th=[15795], 80.00th=[17957], 90.00th=[21890], 95.00th=[24249], 00:09:06.584 | 99.00th=[27919], 99.50th=[28181], 99.90th=[31589], 99.95th=[33424], 00:09:06.584 | 99.99th=[33817] 00:09:06.584 write: IOPS=4130, BW=16.1MiB/s (16.9MB/s)(16.3MiB/1009msec); 0 zone resets 00:09:06.584 slat (nsec): min=1967, max=9283.1k, avg=121436.48, stdev=722307.49 00:09:06.584 clat (usec): min=2593, max=42490, avg=16175.62, stdev=7090.14 00:09:06.584 lat (usec): min=2597, max=42504, avg=16297.06, stdev=7155.78 00:09:06.584 clat percentiles (usec): 00:09:06.584 | 1.00th=[ 5342], 5.00th=[ 7439], 10.00th=[10159], 
20.00th=[11863], 00:09:06.584 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13698], 60.00th=[14746], 00:09:06.584 | 70.00th=[17171], 80.00th=[20579], 90.00th=[26608], 95.00th=[32637], 00:09:06.584 | 99.00th=[39584], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:09:06.584 | 99.99th=[42730] 00:09:06.584 bw ( KiB/s): min=13880, max=18858, per=22.86%, avg=16369.00, stdev=3519.98, samples=2 00:09:06.584 iops : min= 3470, max= 4714, avg=4092.00, stdev=879.64, samples=2 00:09:06.584 lat (msec) : 4=0.01%, 10=14.22%, 20=68.22%, 50=17.55% 00:09:06.584 cpu : usr=3.77%, sys=5.56%, ctx=333, majf=0, minf=1 00:09:06.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:06.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.584 issued rwts: total=4096,4168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.584 job2: (groupid=0, jobs=1): err= 0: pid=2801600: Wed Nov 20 09:41:29 2024 00:09:06.584 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:09:06.584 slat (nsec): min=1342, max=12426k, avg=109121.40, stdev=763763.53 00:09:06.584 clat (usec): min=3923, max=59476, avg=13142.51, stdev=6192.29 00:09:06.584 lat (usec): min=3930, max=59482, avg=13251.63, stdev=6239.45 00:09:06.584 clat percentiles (usec): 00:09:06.584 | 1.00th=[ 5276], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10814], 00:09:06.584 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11863], 00:09:06.584 | 70.00th=[12387], 80.00th=[14615], 90.00th=[18220], 95.00th=[20317], 00:09:06.584 | 99.00th=[55313], 99.50th=[57410], 99.90th=[58459], 99.95th=[59507], 00:09:06.584 | 99.99th=[59507] 00:09:06.584 write: IOPS=5397, BW=21.1MiB/s (22.1MB/s)(21.3MiB/1012msec); 0 zone resets 00:09:06.584 slat (nsec): min=1986, max=9845.0k, avg=75685.92, stdev=332173.53 00:09:06.584 clat (usec): 
min=1445, max=59455, avg=11180.78, stdev=3770.70 00:09:06.584 lat (usec): min=1458, max=59458, avg=11256.46, stdev=3787.79 00:09:06.584 clat percentiles (usec): 00:09:06.584 | 1.00th=[ 3687], 5.00th=[ 5800], 10.00th=[ 7242], 20.00th=[10028], 00:09:06.584 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:09:06.584 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12911], 95.00th=[13566], 00:09:06.584 | 99.00th=[25035], 99.50th=[35390], 99.90th=[47973], 99.95th=[47973], 00:09:06.584 | 99.99th=[59507] 00:09:06.584 bw ( KiB/s): min=20806, max=21832, per=29.77%, avg=21319.00, stdev=725.49, samples=2 00:09:06.584 iops : min= 5201, max= 5458, avg=5329.50, stdev=181.73, samples=2 00:09:06.584 lat (msec) : 2=0.02%, 4=0.83%, 10=16.81%, 20=78.48%, 50=3.26% 00:09:06.584 lat (msec) : 100=0.60% 00:09:06.584 cpu : usr=3.76%, sys=5.64%, ctx=686, majf=0, minf=2 00:09:06.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:06.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.584 issued rwts: total=5120,5462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.584 job3: (groupid=0, jobs=1): err= 0: pid=2801602: Wed Nov 20 09:41:29 2024 00:09:06.584 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:09:06.584 slat (nsec): min=1230, max=22948k, avg=187961.78, stdev=1236319.26 00:09:06.584 clat (usec): min=5161, max=49891, avg=24256.14, stdev=11050.31 00:09:06.584 lat (usec): min=5873, max=52006, avg=24444.11, stdev=11155.21 00:09:06.584 clat percentiles (usec): 00:09:06.584 | 1.00th=[ 6521], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:09:06.584 | 30.00th=[14615], 40.00th=[20055], 50.00th=[23987], 60.00th=[29492], 00:09:06.584 | 70.00th=[32113], 80.00th=[34341], 90.00th=[38011], 95.00th=[41681], 00:09:06.584 | 99.00th=[49546], 
99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:09:06.584 | 99.99th=[50070] 00:09:06.584 write: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1004msec); 0 zone resets 00:09:06.584 slat (nsec): min=1916, max=24220k, avg=172653.44, stdev=993675.00 00:09:06.584 clat (usec): min=953, max=52695, avg=22722.19, stdev=12398.25 00:09:06.584 lat (usec): min=976, max=53118, avg=22894.85, stdev=12486.47 00:09:06.584 clat percentiles (usec): 00:09:06.584 | 1.00th=[ 6783], 5.00th=[ 7767], 10.00th=[10290], 20.00th=[11469], 00:09:06.584 | 30.00th=[13698], 40.00th=[16909], 50.00th=[21365], 60.00th=[22676], 00:09:06.584 | 70.00th=[25560], 80.00th=[30540], 90.00th=[45876], 95.00th=[48497], 00:09:06.584 | 99.00th=[52167], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:09:06.584 | 99.99th=[52691] 00:09:06.584 bw ( KiB/s): min= 8192, max=13624, per=15.23%, avg=10908.00, stdev=3841.00, samples=2 00:09:06.584 iops : min= 2048, max= 3406, avg=2727.00, stdev=960.25, samples=2 00:09:06.584 lat (usec) : 1000=0.04% 00:09:06.584 lat (msec) : 2=0.02%, 10=6.17%, 20=35.57%, 50=56.49%, 100=1.72% 00:09:06.584 cpu : usr=1.50%, sys=3.29%, ctx=283, majf=0, minf=1 00:09:06.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:06.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.584 issued rwts: total=2560,2855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.584 00:09:06.584 Run status group 0 (all jobs): 00:09:06.584 READ: bw=66.0MiB/s (69.2MB/s), 9.96MiB/s-20.7MiB/s (10.4MB/s-21.7MB/s), io=66.8MiB (70.0MB), run=1004-1012msec 00:09:06.584 WRITE: bw=69.9MiB/s (73.3MB/s), 11.1MiB/s-21.9MiB/s (11.6MB/s-23.0MB/s), io=70.8MiB (74.2MB), run=1004-1012msec 00:09:06.584 00:09:06.584 Disk stats (read/write): 00:09:06.584 nvme0n1: ios=4701/5118, merge=0/0, ticks=20747/22405, 
in_queue=43152, util=97.29% 00:09:06.584 nvme0n2: ios=3603/3663, merge=0/0, ticks=31691/24957, in_queue=56648, util=98.17% 00:09:06.584 nvme0n3: ios=4347/4608, merge=0/0, ticks=55406/49586, in_queue=104992, util=88.96% 00:09:06.584 nvme0n4: ios=1783/2048, merge=0/0, ticks=17693/19778, in_queue=37471, util=89.61% 00:09:06.584 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:06.584 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2801722 00:09:06.584 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:06.584 09:41:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:06.584 [global] 00:09:06.584 thread=1 00:09:06.584 invalidate=1 00:09:06.584 rw=read 00:09:06.584 time_based=1 00:09:06.584 runtime=10 00:09:06.584 ioengine=libaio 00:09:06.584 direct=1 00:09:06.584 bs=4096 00:09:06.584 iodepth=1 00:09:06.584 norandommap=1 00:09:06.584 numjobs=1 00:09:06.584 00:09:06.584 [job0] 00:09:06.584 filename=/dev/nvme0n1 00:09:06.584 [job1] 00:09:06.584 filename=/dev/nvme0n2 00:09:06.584 [job2] 00:09:06.584 filename=/dev/nvme0n3 00:09:06.584 [job3] 00:09:06.584 filename=/dev/nvme0n4 00:09:06.584 Could not set queue depth (nvme0n1) 00:09:06.584 Could not set queue depth (nvme0n2) 00:09:06.584 Could not set queue depth (nvme0n3) 00:09:06.584 Could not set queue depth (nvme0n4) 00:09:06.841 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.841 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.841 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.842 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.842 
fio-3.35 00:09:06.842 Starting 4 threads 00:09:09.366 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:09.624 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46497792, buflen=4096 00:09:09.624 fio: pid=2801977, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:09.624 09:41:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:09.881 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=50233344, buflen=4096 00:09:09.881 fio: pid=2801976, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:09.881 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:09.881 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:10.139 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.139 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:10.139 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3858432, buflen=4096 00:09:10.139 fio: pid=2801974, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.397 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.397 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:10.397 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=352256, buflen=4096 00:09:10.397 fio: pid=2801975, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:10.397 00:09:10.397 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2801974: Wed Nov 20 09:41:33 2024 00:09:10.397 read: IOPS=297, BW=1189KiB/s (1218kB/s)(3768KiB/3169msec) 00:09:10.397 slat (usec): min=6, max=711, avg= 9.68, stdev=23.46 00:09:10.397 clat (usec): min=180, max=42041, avg=3343.69, stdev=10859.16 00:09:10.397 lat (usec): min=187, max=42063, avg=3353.36, stdev=10865.64 00:09:10.397 clat percentiles (usec): 00:09:10.397 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:09:10.397 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:09:10.397 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 265], 95.00th=[41157], 00:09:10.397 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.397 | 99.99th=[42206] 00:09:10.397 bw ( KiB/s): min= 93, max= 5168, per=4.28%, avg=1250.17, stdev=2056.59, samples=6 00:09:10.397 iops : min= 23, max= 1292, avg=312.50, stdev=514.17, samples=6 00:09:10.397 lat (usec) : 250=87.27%, 500=4.88%, 750=0.11% 00:09:10.397 lat (msec) : 50=7.64% 00:09:10.397 cpu : usr=0.09%, sys=0.32%, ctx=946, majf=0, minf=1 00:09:10.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.397 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.397 issued rwts: total=943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.397 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2801975: Wed Nov 20 
09:41:33 2024 00:09:10.397 read: IOPS=25, BW=102KiB/s (104kB/s)(344KiB/3375msec) 00:09:10.397 slat (usec): min=6, max=6752, avg=97.17, stdev=721.91 00:09:10.397 clat (usec): min=251, max=42017, avg=39126.67, stdev=8627.43 00:09:10.397 lat (usec): min=262, max=42030, avg=39146.45, stdev=8628.81 00:09:10.397 clat percentiles (usec): 00:09:10.397 | 1.00th=[ 251], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:10.397 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.397 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:10.397 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.397 | 99.99th=[42206] 00:09:10.397 bw ( KiB/s): min= 93, max= 112, per=0.35%, avg=102.17, stdev= 8.45, samples=6 00:09:10.397 iops : min= 23, max= 28, avg=25.50, stdev= 2.17, samples=6 00:09:10.397 lat (usec) : 500=4.60% 00:09:10.397 lat (msec) : 50=94.25% 00:09:10.397 cpu : usr=0.00%, sys=0.27%, ctx=90, majf=0, minf=2 00:09:10.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.397 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.397 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.397 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2801976: Wed Nov 20 09:41:33 2024 00:09:10.397 read: IOPS=4170, BW=16.3MiB/s (17.1MB/s)(47.9MiB/2941msec) 00:09:10.397 slat (usec): min=5, max=10862, avg= 9.22, stdev=119.84 00:09:10.397 clat (usec): min=160, max=41667, avg=227.54, stdev=642.56 00:09:10.397 lat (usec): min=167, max=41674, avg=236.76, stdev=653.95 00:09:10.397 clat percentiles (usec): 00:09:10.397 | 1.00th=[ 172], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:09:10.397 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 
212], 60.00th=[ 219], 00:09:10.397 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 253], 00:09:10.397 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 486], 99.95th=[ 553], 00:09:10.397 | 99.99th=[41157] 00:09:10.397 bw ( KiB/s): min=14152, max=18648, per=56.48%, avg=16496.00, stdev=1877.26, samples=5 00:09:10.397 iops : min= 3538, max= 4662, avg=4124.00, stdev=469.31, samples=5 00:09:10.397 lat (usec) : 250=93.66%, 500=6.25%, 750=0.04% 00:09:10.397 lat (msec) : 4=0.02%, 50=0.02% 00:09:10.397 cpu : usr=1.19%, sys=5.14%, ctx=12267, majf=0, minf=2 00:09:10.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.397 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.397 issued rwts: total=12265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.397 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2801977: Wed Nov 20 09:41:33 2024 00:09:10.397 read: IOPS=4155, BW=16.2MiB/s (17.0MB/s)(44.3MiB/2732msec) 00:09:10.397 slat (nsec): min=5966, max=42509, avg=8525.44, stdev=1830.13 00:09:10.397 clat (usec): min=164, max=40652, avg=228.76, stdev=380.15 00:09:10.397 lat (usec): min=171, max=40660, avg=237.28, stdev=380.19 00:09:10.397 clat percentiles (usec): 00:09:10.397 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:09:10.397 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 233], 00:09:10.397 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 262], 00:09:10.397 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 318], 99.95th=[ 453], 00:09:10.397 | 99.99th=[ 506] 00:09:10.397 bw ( KiB/s): min=14880, max=18416, per=56.73%, avg=16569.60, stdev=1582.84, samples=5 00:09:10.397 iops : min= 3720, max= 4604, avg=4142.40, stdev=395.71, samples=5 00:09:10.397 lat (usec) : 
250=84.15%, 500=15.82%, 750=0.01% 00:09:10.397 lat (msec) : 50=0.01% 00:09:10.397 cpu : usr=1.61%, sys=5.42%, ctx=11357, majf=0, minf=2 00:09:10.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.397 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.397 issued rwts: total=11353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.397 00:09:10.397 Run status group 0 (all jobs): 00:09:10.397 READ: bw=28.5MiB/s (29.9MB/s), 102KiB/s-16.3MiB/s (104kB/s-17.1MB/s), io=96.3MiB (101MB), run=2732-3375msec 00:09:10.397 00:09:10.397 Disk stats (read/write): 00:09:10.397 nvme0n1: ios=940/0, merge=0/0, ticks=3061/0, in_queue=3061, util=95.72% 00:09:10.397 nvme0n2: ios=115/0, merge=0/0, ticks=3825/0, in_queue=3825, util=100.00% 00:09:10.397 nvme0n3: ios=11930/0, merge=0/0, ticks=2629/0, in_queue=2629, util=95.94% 00:09:10.397 nvme0n4: ios=10885/0, merge=0/0, ticks=3415/0, in_queue=3415, util=98.70% 00:09:10.654 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.654 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:10.654 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.654 09:41:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:10.912 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.912 09:41:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:11.170 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:11.170 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2801722 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:11.429 nvmf hotplug test: fio failed as expected 00:09:11.429 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.687 rmmod nvme_tcp 00:09:11.687 rmmod nvme_fabrics 00:09:11.687 rmmod nvme_keyring 00:09:11.687 09:41:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.687 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:11.687 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:09:11.687 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2798910 ']' 00:09:11.687 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2798910 00:09:11.687 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2798910 ']' 00:09:11.687 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2798910 00:09:11.687 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:11.687 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.687 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2798910 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2798910' 00:09:11.948 killing process with pid 2798910 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2798910 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2798910 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:11.948 09:41:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.948 09:41:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.484 00:09:14.484 real 0m26.912s 00:09:14.484 user 1m46.544s 00:09:14.484 sys 0m8.786s 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.484 ************************************ 00:09:14.484 END TEST nvmf_fio_target 00:09:14.484 ************************************ 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:09:14.484 ************************************ 00:09:14.484 START TEST nvmf_bdevio 00:09:14.484 ************************************ 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.484 * Looking for test storage... 00:09:14.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # lcov --version 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.484 09:41:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:09:14.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.484 --rc genhtml_branch_coverage=1 00:09:14.484 --rc genhtml_function_coverage=1 00:09:14.484 --rc genhtml_legend=1 00:09:14.484 --rc geninfo_all_blocks=1 00:09:14.484 --rc geninfo_unexecuted_blocks=1 00:09:14.484 00:09:14.484 ' 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:09:14.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.484 --rc genhtml_branch_coverage=1 00:09:14.484 --rc genhtml_function_coverage=1 00:09:14.484 --rc genhtml_legend=1 00:09:14.484 --rc geninfo_all_blocks=1 00:09:14.484 --rc geninfo_unexecuted_blocks=1 00:09:14.484 00:09:14.484 ' 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:09:14.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.484 --rc genhtml_branch_coverage=1 00:09:14.484 --rc genhtml_function_coverage=1 00:09:14.484 --rc genhtml_legend=1 00:09:14.484 --rc geninfo_all_blocks=1 00:09:14.484 --rc geninfo_unexecuted_blocks=1 00:09:14.484 00:09:14.484 ' 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:09:14.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.484 --rc genhtml_branch_coverage=1 00:09:14.484 --rc genhtml_function_coverage=1 00:09:14.484 --rc genhtml_legend=1 00:09:14.484 --rc geninfo_all_blocks=1 00:09:14.484 --rc geninfo_unexecuted_blocks=1 00:09:14.484 00:09:14.484 ' 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.484 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.485 09:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.053 09:41:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.053 09:41:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:21.053 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:21.053 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.053 
09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:21.053 Found net devices under 0000:86:00.0: cvl_0_0 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:21.053 Found net devices under 0000:86:00.1: cvl_0_1 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.053 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:21.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:09:21.054 00:09:21.054 --- 10.0.0.2 ping statistics --- 00:09:21.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.054 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:09:21.054 00:09:21.054 --- 10.0.0.1 ping statistics --- 00:09:21.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.054 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.054 09:41:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2806227 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2806227 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2806227 ']' 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.054 [2024-11-20 09:41:43.630482] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:09:21.054 [2024-11-20 09:41:43.630532] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.054 [2024-11-20 09:41:43.709476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.054 [2024-11-20 09:41:43.752366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.054 [2024-11-20 09:41:43.752406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.054 [2024-11-20 09:41:43.752414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.054 [2024-11-20 09:41:43.752420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.054 [2024-11-20 09:41:43.752425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:21.054 [2024-11-20 09:41:43.753942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:21.054 [2024-11-20 09:41:43.754033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:21.054 [2024-11-20 09:41:43.754069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.054 [2024-11-20 09:41:43.754069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.054 [2024-11-20 09:41:43.890776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.054 09:41:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.054 Malloc0 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.054 [2024-11-20 09:41:43.954382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:21.054 { 00:09:21.054 "params": { 00:09:21.054 "name": "Nvme$subsystem", 00:09:21.054 "trtype": "$TEST_TRANSPORT", 00:09:21.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:21.054 "adrfam": "ipv4", 00:09:21.054 "trsvcid": "$NVMF_PORT", 00:09:21.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:21.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:21.054 "hdgst": ${hdgst:-false}, 00:09:21.054 "ddgst": ${ddgst:-false} 00:09:21.054 }, 00:09:21.054 "method": "bdev_nvme_attach_controller" 00:09:21.054 } 00:09:21.054 EOF 00:09:21.054 )") 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:21.054 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:21.055 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:21.055 09:41:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:21.055 "params": { 00:09:21.055 "name": "Nvme1", 00:09:21.055 "trtype": "tcp", 00:09:21.055 "traddr": "10.0.0.2", 00:09:21.055 "adrfam": "ipv4", 00:09:21.055 "trsvcid": "4420", 00:09:21.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:21.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:21.055 "hdgst": false, 00:09:21.055 "ddgst": false 00:09:21.055 }, 00:09:21.055 "method": "bdev_nvme_attach_controller" 00:09:21.055 }' 00:09:21.055 [2024-11-20 09:41:44.003426] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:09:21.055 [2024-11-20 09:41:44.003471] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806399 ] 00:09:21.055 [2024-11-20 09:41:44.094758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:21.055 [2024-11-20 09:41:44.139057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.055 [2024-11-20 09:41:44.139093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.055 [2024-11-20 09:41:44.139093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.055 I/O targets: 00:09:21.055 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:21.055 00:09:21.055 00:09:21.055 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.055 http://cunit.sourceforge.net/ 00:09:21.055 00:09:21.055 00:09:21.055 Suite: bdevio tests on: Nvme1n1 00:09:21.055 Test: blockdev write read block ...passed 00:09:21.313 Test: blockdev write zeroes read block ...passed 00:09:21.313 Test: blockdev write zeroes read no split ...passed 00:09:21.313 Test: blockdev write zeroes read split 
...passed 00:09:21.313 Test: blockdev write zeroes read split partial ...passed 00:09:21.313 Test: blockdev reset ...[2024-11-20 09:41:44.417091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:21.313 [2024-11-20 09:41:44.417165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ed340 (9): Bad file descriptor 00:09:21.313 [2024-11-20 09:41:44.435936] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:21.313 passed 00:09:21.313 Test: blockdev write read 8 blocks ...passed 00:09:21.313 Test: blockdev write read size > 128k ...passed 00:09:21.313 Test: blockdev write read invalid size ...passed 00:09:21.313 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:21.313 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:21.313 Test: blockdev write read max offset ...passed 00:09:21.313 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:21.313 Test: blockdev writev readv 8 blocks ...passed 00:09:21.313 Test: blockdev writev readv 30 x 1block ...passed 00:09:21.571 Test: blockdev writev readv block ...passed 00:09:21.571 Test: blockdev writev readv size > 128k ...passed 00:09:21.571 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:21.571 Test: blockdev comparev and writev ...[2024-11-20 09:41:44.646903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.571 [2024-11-20 09:41:44.646931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.646951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.571 [2024-11-20 
09:41:44.646960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.647205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.571 [2024-11-20 09:41:44.647217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.647229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.571 [2024-11-20 09:41:44.647236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.647471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.571 [2024-11-20 09:41:44.647481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.647493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.571 [2024-11-20 09:41:44.647500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.647708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.571 [2024-11-20 09:41:44.647719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.647730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:21.571 [2024-11-20 09:41:44.647737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:21.571 passed 00:09:21.571 Test: blockdev nvme passthru rw ...passed 00:09:21.571 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:41:44.730232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:21.571 [2024-11-20 09:41:44.730250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.730358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:21.571 [2024-11-20 09:41:44.730368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.730475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:21.571 [2024-11-20 09:41:44.730486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:21.571 [2024-11-20 09:41:44.730595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:21.571 [2024-11-20 09:41:44.730605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:21.571 passed 00:09:21.571 Test: blockdev nvme admin passthru ...passed 00:09:21.571 Test: blockdev copy ...passed 00:09:21.571 00:09:21.571 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.571 suites 1 1 n/a 0 0 00:09:21.571 tests 23 23 23 0 0 00:09:21.571 asserts 152 152 152 0 n/a 00:09:21.571 00:09:21.571 Elapsed time = 0.992 seconds 
00:09:21.829 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.829 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.829 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.830 rmmod nvme_tcp 00:09:21.830 rmmod nvme_fabrics 00:09:21.830 rmmod nvme_keyring 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2806227 ']' 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2806227 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2806227 ']' 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2806227 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.830 09:41:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806227 00:09:21.830 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:21.830 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:21.830 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806227' 00:09:21.830 killing process with pid 2806227 00:09:21.830 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2806227 00:09:21.830 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2806227 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.141 09:41:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.136 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.136 00:09:24.136 real 0m9.929s 00:09:24.136 user 0m9.435s 00:09:24.136 sys 0m4.993s 00:09:24.136 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.136 09:41:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.136 ************************************ 00:09:24.136 END TEST nvmf_bdevio 00:09:24.136 ************************************ 00:09:24.136 09:41:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:24.136 00:09:24.136 real 4m36.693s 00:09:24.136 user 10m20.453s 00:09:24.136 sys 1m38.317s 00:09:24.136 09:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.136 09:41:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.136 ************************************ 00:09:24.136 END TEST nvmf_target_core 00:09:24.136 ************************************ 00:09:24.136 09:41:47 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:24.136 09:41:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.136 09:41:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.136 09:41:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:24.136 ************************************ 00:09:24.136 START TEST nvmf_target_extra 00:09:24.136 ************************************ 00:09:24.136 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:24.415 * Looking for test storage... 00:09:24.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1703 -- # lcov --version 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.415 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:09:24.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.416 --rc genhtml_branch_coverage=1 00:09:24.416 --rc genhtml_function_coverage=1 00:09:24.416 --rc genhtml_legend=1 00:09:24.416 --rc geninfo_all_blocks=1 
00:09:24.416 --rc geninfo_unexecuted_blocks=1 00:09:24.416 00:09:24.416 ' 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:09:24.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.416 --rc genhtml_branch_coverage=1 00:09:24.416 --rc genhtml_function_coverage=1 00:09:24.416 --rc genhtml_legend=1 00:09:24.416 --rc geninfo_all_blocks=1 00:09:24.416 --rc geninfo_unexecuted_blocks=1 00:09:24.416 00:09:24.416 ' 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:09:24.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.416 --rc genhtml_branch_coverage=1 00:09:24.416 --rc genhtml_function_coverage=1 00:09:24.416 --rc genhtml_legend=1 00:09:24.416 --rc geninfo_all_blocks=1 00:09:24.416 --rc geninfo_unexecuted_blocks=1 00:09:24.416 00:09:24.416 ' 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:09:24.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.416 --rc genhtml_branch_coverage=1 00:09:24.416 --rc genhtml_function_coverage=1 00:09:24.416 --rc genhtml_legend=1 00:09:24.416 --rc geninfo_all_blocks=1 00:09:24.416 --rc geninfo_unexecuted_blocks=1 00:09:24.416 00:09:24.416 ' 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.416 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:24.417 ************************************ 00:09:24.417 START TEST nvmf_example 00:09:24.417 ************************************ 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:24.417 * Looking for test storage... 00:09:24.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.417 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # lcov --version 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.677 
09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:09:24.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.677 --rc genhtml_branch_coverage=1 00:09:24.677 --rc genhtml_function_coverage=1 00:09:24.677 --rc genhtml_legend=1 00:09:24.677 --rc geninfo_all_blocks=1 00:09:24.677 --rc geninfo_unexecuted_blocks=1 00:09:24.677 00:09:24.677 ' 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:09:24.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.677 --rc genhtml_branch_coverage=1 00:09:24.677 --rc genhtml_function_coverage=1 00:09:24.677 --rc genhtml_legend=1 00:09:24.677 --rc geninfo_all_blocks=1 00:09:24.677 --rc geninfo_unexecuted_blocks=1 00:09:24.677 00:09:24.677 ' 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:09:24.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.677 --rc genhtml_branch_coverage=1 00:09:24.677 --rc genhtml_function_coverage=1 00:09:24.677 --rc genhtml_legend=1 00:09:24.677 --rc geninfo_all_blocks=1 00:09:24.677 --rc geninfo_unexecuted_blocks=1 00:09:24.677 00:09:24.677 ' 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:09:24.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.677 --rc 
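The `cmp_versions 1.15 '<' 2` trace above splits both version strings on `.`, `-`, and `:` (`IFS=.-:` plus `read -ra`) and compares them field by field. A minimal standalone sketch of that element-wise comparison (not SPDK's actual `scripts/common.sh`, just the same technique):

```shell
#!/usr/bin/env bash
# Sketch of the element-wise version comparison the trace shows:
# split both versions on '.', '-', ':' and compare numerically.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=${#ver1[@]}
  (( ${#ver2[@]} > len )) && len=${#ver2[@]}
  for (( v = 0; v < len; v++ )); do
    # missing fields default to 0, as in the traced (( v < ... )) loop
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1  # equal is not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
```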
genhtml_branch_coverage=1 00:09:24.677 --rc genhtml_function_coverage=1 00:09:24.677 --rc genhtml_legend=1 00:09:24.677 --rc geninfo_all_blocks=1 00:09:24.677 --rc geninfo_unexecuted_blocks=1 00:09:24.677 00:09:24.677 ' 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.677 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:24.678 09:41:47 
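The `common.sh: line 33: [: : integer expression expected` message logged above (twice in this chunk) comes from `'[' '' -eq 1 ']'`: `-eq` requires integers on both sides, so an empty variable makes `[` error out and the branch is simply not taken. A small sketch of the failure and a defensive variant (`flag` is a hypothetical stand-in for the unset CI variable):

```shell
#!/usr/bin/env bash
# Reproduce the logged error: '[' "" -eq 1 ']' is not an integer test,
# so [ fails (status 2) and the && branch is skipped.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null && echo "taken" || echo "skipped"

# Defensive variant: default the empty value to 0 before comparing.
[ "${flag:-0}" -eq 1 ] && echo "taken" || echo "skipped cleanly"
```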
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.678 
09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.678 09:41:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.244 09:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:31.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:31.244 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:31.244 Found net devices under 0000:86:00.0: cvl_0_0 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.244 09:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:31.244 Found net devices under 0000:86:00.1: cvl_0_1 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.244 
09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:09:31.244 00:09:31.244 --- 10.0.0.2 ping statistics --- 00:09:31.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.244 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:09:31.244 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:09:31.244 00:09:31.244 --- 10.0.0.1 ping statistics --- 00:09:31.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.245 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.245 09:41:53 
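The netns setup and ping checks above split one host into a target/initiator pair: the target NIC (`cvl_0_0`) is moved into the `cvl_0_0_ns_spdk` namespace with 10.0.0.2, while the initiator NIC (`cvl_0_1`) stays in the root namespace with 10.0.0.1. A dry-run sketch of that sequence, with interface and namespace names taken from the log; `run` only prints each command, so the sketch traces the shape of the setup without root or the E810 hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator namespace split traced above.
NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
run() { echo "$*"; }   # on a real host, replace with: "$@" (likely via sudo)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"               # target NIC enters the netns
run ip addr add 10.0.0.1/24 dev "$INI_IF"           # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```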
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2810184 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2810184 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2810184 ']' 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:31.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.245 09:41:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.503 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:31.761 
09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:31.761 09:41:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:43.969 Initializing NVMe Controllers 00:09:43.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:43.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:43.969 Initialization complete. Launching workers. 00:09:43.970 ======================================================== 00:09:43.970 Latency(us) 00:09:43.970 Device Information : IOPS MiB/s Average min max 00:09:43.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18110.63 70.74 3533.24 627.29 15382.64 00:09:43.970 ======================================================== 00:09:43.970 Total : 18110.63 70.74 3533.24 627.29 15382.64 00:09:43.970 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.970 rmmod nvme_tcp 00:09:43.970 rmmod nvme_fabrics 00:09:43.970 rmmod nvme_keyring 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
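The trace above stands up an NVMe-oF TCP subsystem over RPC and then drives it with spdk_nvme_perf. Stripped of the xtrace prefixes, the sequence can be sketched as the standalone script below. The SPDK_DIR default, the use of scripts/rpc.py (the harness calls its own rpc_cmd wrapper instead), and the existence guard are my additions; the commands only take effect against an already-running nvmf_tgt that exposes a Malloc0 bdev.

```shell
# Hedged sketch of the bring-up + perf sequence seen in the log.
# Assumptions: SPDK_DIR location, scripts/rpc.py as the RPC client,
# and a running nvmf_tgt that already has a Malloc0 bdev.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"
perf="$SPDK_DIR/build/bin/spdk_nvme_perf"

if [ -x "$rpc" ] && [ -x "$perf" ]; then
    # Create the subsystem, attach the namespace, open a TCP listener.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 64-deep 4 KiB random workload, 30% reads, 10 s run, as in the log.
    "$perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    status=ran
else
    echo "SPDK tools not found under $SPDK_DIR; skipping"
    status=skipped
fi
```

The perf flags mirror the log's invocation exactly; only the guard and variable names are hypothetical.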
00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2810184 ']' 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2810184 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2810184 ']' 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2810184 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2810184 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2810184' 00:09:43.970 killing process with pid 2810184 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2810184 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2810184 00:09:43.970 nvmf threads initialize successfully 00:09:43.970 bdev subsystem init successfully 00:09:43.970 created a nvmf target service 00:09:43.970 create targets's poll groups done 00:09:43.970 all subsystems of target started 00:09:43.970 nvmf target is running 00:09:43.970 all subsystems of target stopped 00:09:43.970 destroy targets's poll groups done 00:09:43.970 destroyed the nvmf target service 00:09:43.970 bdev subsystem 
finish successfully 00:09:43.970 nvmf threads destroy successfully 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.970 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.538 00:09:44.538 real 0m19.969s 00:09:44.538 user 0m46.661s 00:09:44.538 sys 0m5.973s 00:09:44.538 
09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.538 ************************************ 00:09:44.538 END TEST nvmf_example 00:09:44.538 ************************************ 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:44.538 ************************************ 00:09:44.538 START TEST nvmf_filesystem 00:09:44.538 ************************************ 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:44.538 * Looking for test storage... 
00:09:44.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # lcov --version 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.538 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:44.801 
09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:09:44.801 --rc lcov_branch_coverage=1 --rc 
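The cmp_versions trace above is the harness checking whether the detected lcov is older than version 2 by splitting both versions on dots and comparing the numeric fields left to right. A minimal standalone re-implementation could look like the sketch below; ver_lt is my name (the harness uses lt/cmp_versions), missing fields are padded with 0, and equality is treated as not less-than, matching the "return 0" fall-through seen in the trace.

```shell
# Hedged sketch of dotted-version comparison; not the harness's code.
# ver_lt A B -> exit 0 iff A < B, comparing numeric fields left to right.
ver_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad the shorter version with 0s
        if (( x < y )); then return 0
        elif (( x > y )); then return 1
        fi
    done
    return 1   # equal is not strictly less-than
}

# Mirrors the check in the log: lcov 1.15 is older than 2.
ver_lt 1.15 2 && echo "1.15 < 2"
```

Comparing fields numerically rather than lexicographically is what makes 1.2 sort below 1.10, which a plain string compare would get wrong.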
lcov_function_coverage=1 00:09:44.801 --rc genhtml_branch_coverage=1 00:09:44.801 --rc genhtml_function_coverage=1 00:09:44.801 --rc genhtml_legend=1 00:09:44.801 --rc geninfo_all_blocks=1 00:09:44.801 --rc geninfo_unexecuted_blocks=1 00:09:44.801 00:09:44.801 ' 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:09:44.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.801 --rc genhtml_branch_coverage=1 00:09:44.801 --rc genhtml_function_coverage=1 00:09:44.801 --rc genhtml_legend=1 00:09:44.801 --rc geninfo_all_blocks=1 00:09:44.801 --rc geninfo_unexecuted_blocks=1 00:09:44.801 00:09:44.801 ' 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:09:44.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.801 --rc genhtml_branch_coverage=1 00:09:44.801 --rc genhtml_function_coverage=1 00:09:44.801 --rc genhtml_legend=1 00:09:44.801 --rc geninfo_all_blocks=1 00:09:44.801 --rc geninfo_unexecuted_blocks=1 00:09:44.801 00:09:44.801 ' 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:09:44.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.801 --rc genhtml_branch_coverage=1 00:09:44.801 --rc genhtml_function_coverage=1 00:09:44.801 --rc genhtml_legend=1 00:09:44.801 --rc geninfo_all_blocks=1 00:09:44.801 --rc geninfo_unexecuted_blocks=1 00:09:44.801 00:09:44.801 ' 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:44.801 09:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:44.801 09:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:44.801 09:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:44.801 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:44.802 09:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:44.802 09:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:44.802 
09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:44.802 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:44.802 #define SPDK_CONFIG_H 00:09:44.802 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:44.802 #define SPDK_CONFIG_APPS 1 00:09:44.802 #define SPDK_CONFIG_ARCH native 00:09:44.802 #undef SPDK_CONFIG_ASAN 00:09:44.802 #undef SPDK_CONFIG_AVAHI 00:09:44.802 #undef SPDK_CONFIG_CET 00:09:44.802 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:44.802 #define SPDK_CONFIG_COVERAGE 1 00:09:44.802 #define SPDK_CONFIG_CROSS_PREFIX 00:09:44.802 #undef SPDK_CONFIG_CRYPTO 00:09:44.802 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:44.802 #undef SPDK_CONFIG_CUSTOMOCF 00:09:44.802 #undef SPDK_CONFIG_DAOS 00:09:44.802 #define SPDK_CONFIG_DAOS_DIR 00:09:44.802 #define SPDK_CONFIG_DEBUG 1 00:09:44.802 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:44.802 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:44.802 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:44.802 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:44.802 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:44.802 #undef SPDK_CONFIG_DPDK_UADK 00:09:44.802 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:44.802 #define SPDK_CONFIG_EXAMPLES 1 00:09:44.802 #undef SPDK_CONFIG_FC 00:09:44.802 #define SPDK_CONFIG_FC_PATH 00:09:44.802 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:44.802 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:44.802 #define SPDK_CONFIG_FSDEV 1 00:09:44.802 #undef SPDK_CONFIG_FUSE 00:09:44.802 #undef SPDK_CONFIG_FUZZER 00:09:44.802 #define SPDK_CONFIG_FUZZER_LIB 00:09:44.802 #undef SPDK_CONFIG_GOLANG 00:09:44.802 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:44.802 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:44.802 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:44.802 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:44.802 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:44.802 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:44.802 #undef SPDK_CONFIG_HAVE_LZ4 00:09:44.802 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:44.802 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:44.802 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:44.802 #define SPDK_CONFIG_IDXD 1 00:09:44.802 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:44.802 #undef SPDK_CONFIG_IPSEC_MB 00:09:44.802 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:44.802 #define SPDK_CONFIG_ISAL 1 00:09:44.802 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:44.802 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:44.802 #define SPDK_CONFIG_LIBDIR 00:09:44.802 #undef SPDK_CONFIG_LTO 00:09:44.802 #define SPDK_CONFIG_MAX_LCORES 128 00:09:44.802 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:44.802 #define SPDK_CONFIG_NVME_CUSE 1 00:09:44.802 #undef SPDK_CONFIG_OCF 00:09:44.803 #define SPDK_CONFIG_OCF_PATH 00:09:44.803 #define SPDK_CONFIG_OPENSSL_PATH 00:09:44.803 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:44.803 #define SPDK_CONFIG_PGO_DIR 00:09:44.803 #undef SPDK_CONFIG_PGO_USE 00:09:44.803 #define SPDK_CONFIG_PREFIX /usr/local 00:09:44.803 #undef SPDK_CONFIG_RAID5F 00:09:44.803 #undef SPDK_CONFIG_RBD 00:09:44.803 #define SPDK_CONFIG_RDMA 1 00:09:44.803 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:44.803 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:44.803 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:44.803 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:44.803 #define SPDK_CONFIG_SHARED 1 00:09:44.803 #undef SPDK_CONFIG_SMA 00:09:44.803 #define SPDK_CONFIG_TESTS 1 00:09:44.803 #undef SPDK_CONFIG_TSAN 00:09:44.803 #define SPDK_CONFIG_UBLK 1 00:09:44.803 #define SPDK_CONFIG_UBSAN 1 00:09:44.803 #undef SPDK_CONFIG_UNIT_TESTS 00:09:44.803 #undef SPDK_CONFIG_URING 00:09:44.803 #define SPDK_CONFIG_URING_PATH 00:09:44.803 #undef SPDK_CONFIG_URING_ZNS 00:09:44.803 #undef SPDK_CONFIG_USDT 00:09:44.803 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:44.803 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:44.803 #define SPDK_CONFIG_VFIO_USER 1 00:09:44.803 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:44.803 #define SPDK_CONFIG_VHOST 1 00:09:44.803 #define SPDK_CONFIG_VIRTIO 1 00:09:44.803 #undef SPDK_CONFIG_VTUNE 00:09:44.803 #define SPDK_CONFIG_VTUNE_DIR 00:09:44.803 #define SPDK_CONFIG_WERROR 1 00:09:44.803 #define SPDK_CONFIG_WPDK_DIR 00:09:44.803 #undef SPDK_CONFIG_XNVME 00:09:44.803 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:44.803 09:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:44.803 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:44.804 
09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:44.804 09:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:44.804 
09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:44.804 09:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:44.804 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2812660 ]] 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2812660 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # set_test_storage 2147483648 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:44.805 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:44.806 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:44.806 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:44.806 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:44.806 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.XP2ozq 00:09:44.806 09:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.XP2ozq/tests/target /tmp/spdk.XP2ozq 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189163307008 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6800654336 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971949568 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981448192 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:44.806 09:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=532480 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:44.806 * Looking for test storage... 
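The `read -r source fs size use avail _ mount` loop traced above consumes `df -T` rows into associative arrays keyed by mount point. A self-contained sketch of that parsing pattern, fed sample rows taken from this run instead of a live `df` call (array and field names follow the trace):

```shell
#!/usr/bin/env bash
# Parse df -T style rows into associative arrays keyed by mount point,
# mirroring the set_test_storage loop traced above. Input is sample data
# copied from this log, not a live df invocation.
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$size
  uses["$mount"]=$use
  avails["$mount"]=$avail
done <<'EOF'
spdk_root overlay 195963961344 6800654336 189163307008 4% /
tmpfs tmpfs 97981980672 10031104 97971949568 1% /dev/shm
EOF

echo "root fs: ${fss[/]}, available: ${avails[/]} bytes"
```

The later `target_space=${avails[$mount]}` lookups in the trace then only need the mount point returned by `df` for the candidate test directory.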
00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189163307008 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9015246848 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.806 09:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # set -o errtrace 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # shopt -s extdebug 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # true 00:09:44.806 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1697 -- # xtrace_fd 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:44.807 09:42:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # lcov --version 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:09:44.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.807 --rc genhtml_branch_coverage=1 00:09:44.807 --rc genhtml_function_coverage=1 00:09:44.807 --rc genhtml_legend=1 00:09:44.807 --rc geninfo_all_blocks=1 00:09:44.807 --rc geninfo_unexecuted_blocks=1 00:09:44.807 00:09:44.807 ' 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:09:44.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.807 --rc genhtml_branch_coverage=1 00:09:44.807 --rc genhtml_function_coverage=1 00:09:44.807 --rc genhtml_legend=1 00:09:44.807 --rc geninfo_all_blocks=1 00:09:44.807 --rc geninfo_unexecuted_blocks=1 00:09:44.807 00:09:44.807 ' 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:09:44.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.807 --rc genhtml_branch_coverage=1 00:09:44.807 --rc genhtml_function_coverage=1 00:09:44.807 --rc genhtml_legend=1 00:09:44.807 --rc geninfo_all_blocks=1 00:09:44.807 --rc geninfo_unexecuted_blocks=1 00:09:44.807 00:09:44.807 ' 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:09:44.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.807 --rc genhtml_branch_coverage=1 00:09:44.807 --rc genhtml_function_coverage=1 00:09:44.807 --rc genhtml_legend=1 00:09:44.807 --rc geninfo_all_blocks=1 00:09:44.807 --rc geninfo_unexecuted_blocks=1 00:09:44.807 00:09:44.807 ' 00:09:44.807 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.807 09:42:08 
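The `lt 1.15 2` check traced above (via `cmp_versions` in scripts/common.sh) splits both version strings on `.`, `-`, and `:` and compares components numerically. A standalone sketch of that comparison, simplified to numeric components only (the real helper also normalizes non-numeric parts through its `decimal` function):

```shell
# Componentwise "less than" version comparison, sketching the cmp_versions
# logic traced above. Handles only numeric components; the real script
# also sanitizes non-numeric parts before comparing.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Missing components default to 0, so `1.15` vs `2` compares as `(1,15)` vs `(2,0)` and the first component already decides the result.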
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:45.067 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.068 09:42:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.643 09:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.643 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.643 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.644 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.644 09:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.644 Found net devices under 0000:86:00.0: cvl_0_0 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:51.644 Found net devices under 0000:86:00.1: cvl_0_1 00:09:51.644 09:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
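The nvmf_tcp_init entries above and the ip/iptables entries that follow build a loopback test topology: one port of the NIC pair (cvl_0_0) is moved into a private network namespace as the target side, while its sibling (cvl_0_1) stays in the root namespace as the initiator. A dry-run sketch of that sequence is below — interface names and addresses are the ones from this particular run, and `run` only prints each command (swap the `echo` for the real command, which needs root and the actual NICs):

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tcp_init topology from the log above.
# run() prints the command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                      # namespace name used in this run
run ip netns add "$NS"                  # create the target-side namespace
run ip link set cvl_0_0 netns "$NS"     # move the target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1 # initiator IP, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listen port on the initiator-facing interface
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                  # root ns -> target ns, over the wire
```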
00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.644 09:42:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:51.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:09:51.644 00:09:51.644 --- 10.0.0.2 ping statistics --- 00:09:51.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.644 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:51.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:09:51.644 00:09:51.644 --- 10.0.0.1 ping statistics --- 00:09:51.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.644 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:51.644 09:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.644 ************************************ 00:09:51.644 START TEST nvmf_filesystem_no_in_capsule 00:09:51.644 ************************************ 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2816265 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2816265 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2816265 ']' 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.644 09:42:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.644 [2024-11-20 09:42:14.307284] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:09:51.644 [2024-11-20 09:42:14.307328] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.645 [2024-11-20 09:42:14.389697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.645 [2024-11-20 09:42:14.433012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.645 [2024-11-20 09:42:14.433050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
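The nvmf_tgt startup above passes `-m 0xF`, and the reactor notices that follow show threads on cores 0 through 3. Decoding the hex coremask makes that correspondence explicit; a minimal POSIX-shell sketch (the helper name `coremask_to_cores` is made up for illustration):

```shell
#!/bin/sh
# Decode an SPDK/DPDK -m coremask into the CPU cores it selects.
# Each set bit N in the mask means "run a reactor on core N".
coremask_to_cores() {
    mask=$(($1)); bit=0; cores=""
    while [ "$mask" -gt 0 ]; do
        # if the low bit is set, core $bit is selected
        [ $((mask & 1)) -eq 1 ] && cores="${cores:+$cores }$bit"
        mask=$((mask >> 1)); bit=$((bit + 1))
    done
    echo "$cores"
}

coremask_to_cores 0xF   # prints: 0 1 2 3
```

So `-m 0xF` selects cores 0-3, matching the four "Reactor started on core N" notices in this log.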
00:09:51.645 [2024-11-20 09:42:14.433057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.645 [2024-11-20 09:42:14.433064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.645 [2024-11-20 09:42:14.433070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.645 [2024-11-20 09:42:14.434576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.645 [2024-11-20 09:42:14.434688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.645 [2024-11-20 09:42:14.434792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.645 [2024-11-20 09:42:14.434793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.904 [2024-11-20 09:42:15.198043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.904 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.163 Malloc1 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.163 [2024-11-20 09:42:15.344302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:52.163 09:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.163 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:52.163 { 00:09:52.163 "name": "Malloc1", 00:09:52.163 "aliases": [ 00:09:52.163 "76be89c2-ff32-452d-8d86-34cdd1dfb9d3" 00:09:52.163 ], 00:09:52.163 "product_name": "Malloc disk", 00:09:52.163 "block_size": 512, 00:09:52.163 "num_blocks": 1048576, 00:09:52.163 "uuid": "76be89c2-ff32-452d-8d86-34cdd1dfb9d3", 00:09:52.163 "assigned_rate_limits": { 00:09:52.163 "rw_ios_per_sec": 0, 00:09:52.164 "rw_mbytes_per_sec": 0, 00:09:52.164 "r_mbytes_per_sec": 0, 00:09:52.164 "w_mbytes_per_sec": 0 00:09:52.164 }, 00:09:52.164 "claimed": true, 00:09:52.164 "claim_type": "exclusive_write", 00:09:52.164 "zoned": false, 00:09:52.164 "supported_io_types": { 00:09:52.164 "read": true, 00:09:52.164 "write": true, 00:09:52.164 "unmap": true, 00:09:52.164 "flush": true, 00:09:52.164 "reset": true, 00:09:52.164 "nvme_admin": false, 00:09:52.164 "nvme_io": false, 00:09:52.164 "nvme_io_md": false, 00:09:52.164 "write_zeroes": true, 00:09:52.164 "zcopy": true, 00:09:52.164 "get_zone_info": false, 00:09:52.164 "zone_management": false, 00:09:52.164 "zone_append": false, 00:09:52.164 "compare": false, 00:09:52.164 "compare_and_write": 
false, 00:09:52.164 "abort": true, 00:09:52.164 "seek_hole": false, 00:09:52.164 "seek_data": false, 00:09:52.164 "copy": true, 00:09:52.164 "nvme_iov_md": false 00:09:52.164 }, 00:09:52.164 "memory_domains": [ 00:09:52.164 { 00:09:52.164 "dma_device_id": "system", 00:09:52.164 "dma_device_type": 1 00:09:52.164 }, 00:09:52.164 { 00:09:52.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.164 "dma_device_type": 2 00:09:52.164 } 00:09:52.164 ], 00:09:52.164 "driver_specific": {} 00:09:52.164 } 00:09:52.164 ]' 00:09:52.164 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:52.164 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:52.164 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:52.164 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:52.164 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:52.164 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:52.164 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:52.164 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:53.542 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:53.542 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:53.542 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.542 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:53.542 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:55.448 09:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:55.448 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:55.707 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:56.275 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:57.212 09:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.212 ************************************ 00:09:57.212 START TEST filesystem_ext4 00:09:57.212 ************************************ 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:57.212 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:57.212 09:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:57.213 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:57.213 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:57.213 mke2fs 1.47.0 (5-Feb-2023) 00:09:57.471 Discarding device blocks: 0/522240 done 00:09:57.471 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:57.471 Filesystem UUID: 5d6e2c42-4b83-447d-b4aa-028950587929 00:09:57.471 Superblock backups stored on blocks: 00:09:57.471 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:57.471 00:09:57.471 Allocating group tables: 0/64 done 00:09:57.471 Writing inode tables: 0/64 done 00:10:00.295 Creating journal (8192 blocks): done 00:10:02.451 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:10:02.451 00:10:02.451 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:02.451 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:09.018 09:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2816265 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:09.018 00:10:09.018 real 0m11.327s 00:10:09.018 user 0m0.024s 00:10:09.018 sys 0m0.081s 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:09.018 ************************************ 00:10:09.018 END TEST filesystem_ext4 00:10:09.018 ************************************ 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:09.018 
09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.018 ************************************ 00:10:09.018 START TEST filesystem_btrfs 00:10:09.018 ************************************ 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:09.018 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:09.019 09:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:09.019 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:09.019 09:42:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:09.019 btrfs-progs v6.8.1 00:10:09.019 See https://btrfs.readthedocs.io for more information. 00:10:09.019 00:10:09.019 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:09.019 NOTE: several default settings have changed in version 5.15, please make sure 00:10:09.019 this does not affect your deployments: 00:10:09.019 - DUP for metadata (-m dup) 00:10:09.019 - enabled no-holes (-O no-holes) 00:10:09.019 - enabled free-space-tree (-R free-space-tree) 00:10:09.019 00:10:09.019 Label: (null) 00:10:09.019 UUID: 8a7a0847-9f41-41c1-bf44-e3c1cb8242fd 00:10:09.019 Node size: 16384 00:10:09.019 Sector size: 4096 (CPU page size: 4096) 00:10:09.019 Filesystem size: 510.00MiB 00:10:09.019 Block group profiles: 00:10:09.019 Data: single 8.00MiB 00:10:09.019 Metadata: DUP 32.00MiB 00:10:09.019 System: DUP 8.00MiB 00:10:09.019 SSD detected: yes 00:10:09.019 Zoned device: no 00:10:09.019 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:09.019 Checksum: crc32c 00:10:09.019 Number of devices: 1 00:10:09.019 Devices: 00:10:09.019 ID SIZE PATH 00:10:09.019 1 510.00MiB /dev/nvme0n1p1 00:10:09.019 00:10:09.019 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:09.019 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:09.587 09:42:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2816265 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:09.587 00:10:09.587 real 0m0.793s 00:10:09.587 user 0m0.019s 00:10:09.587 sys 0m0.119s 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.587 
09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:09.587 ************************************ 00:10:09.587 END TEST filesystem_btrfs 00:10:09.587 ************************************ 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.587 ************************************ 00:10:09.587 START TEST filesystem_xfs 00:10:09.587 ************************************ 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:09.587 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:09.588 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:09.588 09:42:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:09.588 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:09.588 = sectsz=512 attr=2, projid32bit=1 00:10:09.588 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:09.588 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:09.588 data = bsize=4096 blocks=130560, imaxpct=25 00:10:09.588 = sunit=0 swidth=0 blks 00:10:09.588 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:09.588 log =internal log bsize=4096 blocks=16384, version=2 00:10:09.588 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:09.588 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:10.524 Discarding blocks...Done. 
00:10:10.524 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:10.524 09:42:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2816265 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.430 09:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.430 00:10:12.430 real 0m2.990s 00:10:12.430 user 0m0.021s 00:10:12.430 sys 0m0.079s 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.430 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:12.430 ************************************ 00:10:12.430 END TEST filesystem_xfs 00:10:12.430 ************************************ 00:10:12.689 09:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2816265 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2816265 ']' 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2816265 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816265 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816265' 00:10:12.949 killing process with pid 2816265 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2816265 00:10:12.949 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2816265 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:13.518 00:10:13.518 real 0m22.315s 00:10:13.518 user 1m28.092s 00:10:13.518 sys 0m1.563s 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.518 ************************************ 00:10:13.518 END TEST nvmf_filesystem_no_in_capsule 00:10:13.518 ************************************ 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.518 09:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.518 ************************************ 00:10:13.518 START TEST nvmf_filesystem_in_capsule 00:10:13.518 ************************************ 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2820183 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2820183 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2820183 ']' 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.518 09:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.518 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.518 [2024-11-20 09:42:36.699603] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:10:13.518 [2024-11-20 09:42:36.699649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.518 [2024-11-20 09:42:36.777629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.518 [2024-11-20 09:42:36.820328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.518 [2024-11-20 09:42:36.820364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.518 [2024-11-20 09:42:36.820372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.518 [2024-11-20 09:42:36.820378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.518 [2024-11-20 09:42:36.820383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:13.518 [2024-11-20 09:42:36.821973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.518 [2024-11-20 09:42:36.822037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.518 [2024-11-20 09:42:36.822145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.518 [2024-11-20 09:42:36.822146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.777 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.777 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:13.777 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.777 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.778 [2024-11-20 09:42:36.959790] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.778 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.778 Malloc1 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.778 09:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.778 [2024-11-20 09:42:37.102705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.778 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.037 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:14.037 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:14.037 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:14.037 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:14.037 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:14.037 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:14.037 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.037 09:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.037 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.037 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:14.037 { 00:10:14.037 "name": "Malloc1", 00:10:14.037 "aliases": [ 00:10:14.037 "3f1d7c85-f36f-4ccf-acbb-5ffd3ed47f7c" 00:10:14.037 ], 00:10:14.037 "product_name": "Malloc disk", 00:10:14.037 "block_size": 512, 00:10:14.037 "num_blocks": 1048576, 00:10:14.037 "uuid": "3f1d7c85-f36f-4ccf-acbb-5ffd3ed47f7c", 00:10:14.037 "assigned_rate_limits": { 00:10:14.037 "rw_ios_per_sec": 0, 00:10:14.037 "rw_mbytes_per_sec": 0, 00:10:14.037 "r_mbytes_per_sec": 0, 00:10:14.037 "w_mbytes_per_sec": 0 00:10:14.037 }, 00:10:14.037 "claimed": true, 00:10:14.037 "claim_type": "exclusive_write", 00:10:14.037 "zoned": false, 00:10:14.037 "supported_io_types": { 00:10:14.037 "read": true, 00:10:14.037 "write": true, 00:10:14.037 "unmap": true, 00:10:14.037 "flush": true, 00:10:14.037 "reset": true, 00:10:14.037 "nvme_admin": false, 00:10:14.037 "nvme_io": false, 00:10:14.037 "nvme_io_md": false, 00:10:14.037 "write_zeroes": true, 00:10:14.037 "zcopy": true, 00:10:14.037 "get_zone_info": false, 00:10:14.037 "zone_management": false, 00:10:14.037 "zone_append": false, 00:10:14.037 "compare": false, 00:10:14.037 "compare_and_write": false, 00:10:14.037 "abort": true, 00:10:14.037 "seek_hole": false, 00:10:14.037 "seek_data": false, 00:10:14.037 "copy": true, 00:10:14.037 "nvme_iov_md": false 00:10:14.037 }, 00:10:14.037 "memory_domains": [ 00:10:14.037 { 00:10:14.038 "dma_device_id": "system", 00:10:14.038 "dma_device_type": 1 00:10:14.038 }, 00:10:14.038 { 00:10:14.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.038 "dma_device_type": 2 00:10:14.038 } 00:10:14.038 ], 00:10:14.038 
"driver_specific": {} 00:10:14.038 } 00:10:14.038 ]' 00:10:14.038 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:14.038 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:14.038 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:14.038 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:14.038 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:14.038 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:14.038 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:14.038 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.523 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.523 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:15.523 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.523 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:15.523 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:17.423 09:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:17.423 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:17.682 09:42:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:18.619 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:18.619 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:18.619 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:18.619 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.619 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.878 ************************************ 00:10:18.878 START TEST filesystem_in_capsule_ext4 00:10:18.878 ************************************ 00:10:18.878 09:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:18.878 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:18.879 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:18.879 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:18.879 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:18.879 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:18.879 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:18.879 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:18.879 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:18.879 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:18.879 09:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:18.879 mke2fs 1.47.0 (5-Feb-2023) 00:10:18.879 Discarding device blocks: 
0/522240 done 00:10:18.879 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:18.879 Filesystem UUID: cb8a80e7-b5c9-4049-9bea-26e24a744ca0 00:10:18.879 Superblock backups stored on blocks: 00:10:18.879 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:18.879 00:10:18.879 Allocating group tables: 0/64 done 00:10:18.879 Writing inode tables: 0/64 done 00:10:18.879 Creating journal (8192 blocks): done 00:10:21.085 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:10:21.085 00:10:21.085 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:21.085 09:42:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2820183 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.657 00:10:27.657 real 0m8.530s 00:10:27.657 user 0m0.027s 00:10:27.657 sys 0m0.074s 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:27.657 ************************************ 00:10:27.657 END TEST filesystem_in_capsule_ext4 00:10:27.657 ************************************ 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.657 ************************************ 00:10:27.657 START 
TEST filesystem_in_capsule_btrfs 00:10:27.657 ************************************ 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:27.657 btrfs-progs v6.8.1 00:10:27.657 See https://btrfs.readthedocs.io for more information. 00:10:27.657 00:10:27.657 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:27.657 NOTE: several default settings have changed in version 5.15, please make sure 00:10:27.657 this does not affect your deployments: 00:10:27.657 - DUP for metadata (-m dup) 00:10:27.657 - enabled no-holes (-O no-holes) 00:10:27.657 - enabled free-space-tree (-R free-space-tree) 00:10:27.657 00:10:27.657 Label: (null) 00:10:27.657 UUID: 306ac032-5ce1-46f1-8738-fe7e309ebb5b 00:10:27.657 Node size: 16384 00:10:27.657 Sector size: 4096 (CPU page size: 4096) 00:10:27.657 Filesystem size: 510.00MiB 00:10:27.657 Block group profiles: 00:10:27.657 Data: single 8.00MiB 00:10:27.657 Metadata: DUP 32.00MiB 00:10:27.657 System: DUP 8.00MiB 00:10:27.657 SSD detected: yes 00:10:27.657 Zoned device: no 00:10:27.657 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:27.657 Checksum: crc32c 00:10:27.657 Number of devices: 1 00:10:27.657 Devices: 00:10:27.657 ID SIZE PATH 00:10:27.657 1 510.00MiB /dev/nvme0n1p1 00:10:27.657 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:27.657 09:42:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2820183 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.226 00:10:28.226 real 0m0.826s 00:10:28.226 user 0m0.030s 00:10:28.226 sys 0m0.111s 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 ************************************ 00:10:28.226 END TEST filesystem_in_capsule_btrfs 00:10:28.226 ************************************ 00:10:28.226 09:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 ************************************ 00:10:28.226 START TEST filesystem_in_capsule_xfs 00:10:28.226 ************************************ 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:28.226 
09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:28.226 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:28.226 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:28.226 = sectsz=512 attr=2, projid32bit=1 00:10:28.226 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:28.226 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:28.226 data = bsize=4096 blocks=130560, imaxpct=25 00:10:28.226 = sunit=0 swidth=0 blks 00:10:28.226 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:28.226 log =internal log bsize=4096 blocks=16384, version=2 00:10:28.226 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:28.226 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:29.164 Discarding blocks...Done. 
00:10:29.164 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:29.164 09:42:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:31.068 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:31.068 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:31.068 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:31.068 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:31.068 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:31.068 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:31.069 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2820183 00:10:31.069 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:31.069 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:31.069 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:31.069 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:31.069 00:10:31.069 real 0m2.796s 00:10:31.069 user 0m0.022s 00:10:31.069 sys 0m0.076s 00:10:31.069 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.069 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:31.069 ************************************ 00:10:31.069 END TEST filesystem_in_capsule_xfs 00:10:31.069 ************************************ 00:10:31.069 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:31.327 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:31.327 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.586 09:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2820183 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2820183 ']' 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2820183 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.586 09:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2820183 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2820183' 00:10:31.586 killing process with pid 2820183 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2820183 00:10:31.586 09:42:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2820183 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:31.845 00:10:31.845 real 0m18.432s 00:10:31.845 user 1m12.565s 00:10:31.845 sys 0m1.414s 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.845 ************************************ 00:10:31.845 END TEST nvmf_filesystem_in_capsule 00:10:31.845 ************************************ 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.845 rmmod nvme_tcp 00:10:31.845 rmmod nvme_fabrics 00:10:31.845 rmmod nvme_keyring 00:10:31.845 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:31.846 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.105 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.105 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.105 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.105 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.105 09:42:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.011 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.011 00:10:34.011 real 0m49.545s 00:10:34.011 user 2m42.732s 00:10:34.011 sys 0m7.748s 00:10:34.011 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.011 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:34.011 ************************************ 00:10:34.011 END TEST nvmf_filesystem 00:10:34.011 ************************************ 00:10:34.012 09:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:34.012 09:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:34.012 09:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.012 09:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:34.012 ************************************ 00:10:34.012 START TEST nvmf_target_discovery 00:10:34.012 ************************************ 00:10:34.012 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:34.364 * Looking for test storage... 
00:10:34.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # lcov --version 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:34.364 
09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:10:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.364 --rc genhtml_branch_coverage=1 00:10:34.364 --rc genhtml_function_coverage=1 00:10:34.364 --rc genhtml_legend=1 00:10:34.364 --rc geninfo_all_blocks=1 00:10:34.364 --rc geninfo_unexecuted_blocks=1 00:10:34.364 00:10:34.364 ' 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:10:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.364 --rc genhtml_branch_coverage=1 00:10:34.364 --rc genhtml_function_coverage=1 00:10:34.364 --rc genhtml_legend=1 00:10:34.364 --rc geninfo_all_blocks=1 00:10:34.364 --rc geninfo_unexecuted_blocks=1 00:10:34.364 00:10:34.364 ' 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:10:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.364 --rc genhtml_branch_coverage=1 00:10:34.364 --rc genhtml_function_coverage=1 00:10:34.364 --rc genhtml_legend=1 00:10:34.364 --rc geninfo_all_blocks=1 00:10:34.364 --rc geninfo_unexecuted_blocks=1 00:10:34.364 00:10:34.364 ' 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:10:34.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.364 --rc genhtml_branch_coverage=1 00:10:34.364 --rc genhtml_function_coverage=1 00:10:34.364 --rc genhtml_legend=1 00:10:34.364 --rc geninfo_all_blocks=1 00:10:34.364 --rc geninfo_unexecuted_blocks=1 00:10:34.364 00:10:34.364 ' 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.364 09:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.364 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.365 09:42:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.997 09:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.997 09:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.997 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.997 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.997 09:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.997 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.998 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.998 09:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.998 Found net devices under 0000:86:00.1: cvl_0_1 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:10:40.998 00:10:40.998 --- 10.0.0.2 ping statistics --- 00:10:40.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.998 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:10:40.998 00:10:40.998 --- 10.0.0.1 ping statistics --- 00:10:40.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.998 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2826929 00:10:40.998 09:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2826929 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2826929 ']' 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.998 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.998 [2024-11-20 09:43:03.510981] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:10:40.998 [2024-11-20 09:43:03.511032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.998 [2024-11-20 09:43:03.589908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.998 [2024-11-20 09:43:03.633719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:40.998 [2024-11-20 09:43:03.633758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.998 [2024-11-20 09:43:03.633765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.998 [2024-11-20 09:43:03.633771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.998 [2024-11-20 09:43:03.633777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.998 [2024-11-20 09:43:03.635358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.998 [2024-11-20 09:43:03.635469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.999 [2024-11-20 09:43:03.635574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.999 [2024-11-20 09:43:03.635575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 [2024-11-20 09:43:03.773084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 Null1 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 
09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 [2024-11-20 09:43:03.818494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 Null2 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 
09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 Null3 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 Null4 00:10:40.999 
09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.999 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.000 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:41.000 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.000 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.000 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.000 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:41.000 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.000 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.000 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.000 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:41.000 00:10:41.000 Discovery Log Number of Records 6, Generation counter 6 00:10:41.000 =====Discovery Log Entry 0====== 00:10:41.000 trtype: tcp 00:10:41.000 adrfam: ipv4 00:10:41.000 subtype: current discovery subsystem 00:10:41.000 treq: not required 00:10:41.000 portid: 0 00:10:41.000 trsvcid: 4420 00:10:41.000 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:41.000 traddr: 10.0.0.2 00:10:41.000 eflags: explicit discovery connections, duplicate discovery information 00:10:41.000 sectype: none 00:10:41.000 =====Discovery Log Entry 1====== 00:10:41.000 trtype: tcp 00:10:41.000 adrfam: ipv4 00:10:41.000 subtype: nvme subsystem 00:10:41.000 treq: not required 00:10:41.000 portid: 0 00:10:41.000 trsvcid: 4420 00:10:41.000 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:41.000 traddr: 10.0.0.2 00:10:41.000 eflags: none 00:10:41.000 sectype: none 00:10:41.000 =====Discovery Log Entry 2====== 00:10:41.000 
trtype: tcp 00:10:41.000 adrfam: ipv4 00:10:41.000 subtype: nvme subsystem 00:10:41.000 treq: not required 00:10:41.000 portid: 0 00:10:41.000 trsvcid: 4420 00:10:41.000 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:41.000 traddr: 10.0.0.2 00:10:41.000 eflags: none 00:10:41.000 sectype: none 00:10:41.000 =====Discovery Log Entry 3====== 00:10:41.000 trtype: tcp 00:10:41.000 adrfam: ipv4 00:10:41.000 subtype: nvme subsystem 00:10:41.000 treq: not required 00:10:41.000 portid: 0 00:10:41.000 trsvcid: 4420 00:10:41.000 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:41.000 traddr: 10.0.0.2 00:10:41.000 eflags: none 00:10:41.000 sectype: none 00:10:41.000 =====Discovery Log Entry 4====== 00:10:41.000 trtype: tcp 00:10:41.000 adrfam: ipv4 00:10:41.000 subtype: nvme subsystem 00:10:41.000 treq: not required 00:10:41.000 portid: 0 00:10:41.000 trsvcid: 4420 00:10:41.000 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:41.000 traddr: 10.0.0.2 00:10:41.000 eflags: none 00:10:41.000 sectype: none 00:10:41.000 =====Discovery Log Entry 5====== 00:10:41.000 trtype: tcp 00:10:41.000 adrfam: ipv4 00:10:41.000 subtype: discovery subsystem referral 00:10:41.000 treq: not required 00:10:41.000 portid: 0 00:10:41.000 trsvcid: 4430 00:10:41.000 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:41.000 traddr: 10.0.0.2 00:10:41.000 eflags: none 00:10:41.000 sectype: none 00:10:41.000 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:41.000 Perform nvmf subsystem discovery via RPC 00:10:41.000 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:41.000 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.000 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.000 [ 00:10:41.000 { 00:10:41.000 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:41.000 "subtype": "Discovery", 00:10:41.000 "listen_addresses": [ 00:10:41.000 { 00:10:41.000 "trtype": "TCP", 00:10:41.000 "adrfam": "IPv4", 00:10:41.000 "traddr": "10.0.0.2", 00:10:41.000 "trsvcid": "4420" 00:10:41.000 } 00:10:41.000 ], 00:10:41.000 "allow_any_host": true, 00:10:41.000 "hosts": [] 00:10:41.000 }, 00:10:41.000 { 00:10:41.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.000 "subtype": "NVMe", 00:10:41.000 "listen_addresses": [ 00:10:41.000 { 00:10:41.000 "trtype": "TCP", 00:10:41.000 "adrfam": "IPv4", 00:10:41.000 "traddr": "10.0.0.2", 00:10:41.000 "trsvcid": "4420" 00:10:41.000 } 00:10:41.000 ], 00:10:41.000 "allow_any_host": true, 00:10:41.000 "hosts": [], 00:10:41.000 "serial_number": "SPDK00000000000001", 00:10:41.000 "model_number": "SPDK bdev Controller", 00:10:41.000 "max_namespaces": 32, 00:10:41.000 "min_cntlid": 1, 00:10:41.000 "max_cntlid": 65519, 00:10:41.000 "namespaces": [ 00:10:41.000 { 00:10:41.000 "nsid": 1, 00:10:41.000 "bdev_name": "Null1", 00:10:41.000 "name": "Null1", 00:10:41.000 "nguid": "256579C9FF624695A417CFF32E1F1FBC", 00:10:41.000 "uuid": "256579c9-ff62-4695-a417-cff32e1f1fbc" 00:10:41.000 } 00:10:41.000 ] 00:10:41.000 }, 00:10:41.000 { 00:10:41.000 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:41.000 "subtype": "NVMe", 00:10:41.000 "listen_addresses": [ 00:10:41.000 { 00:10:41.000 "trtype": "TCP", 00:10:41.000 "adrfam": "IPv4", 00:10:41.000 "traddr": "10.0.0.2", 00:10:41.000 "trsvcid": "4420" 00:10:41.000 } 00:10:41.000 ], 00:10:41.000 "allow_any_host": true, 00:10:41.000 "hosts": [], 00:10:41.000 "serial_number": "SPDK00000000000002", 00:10:41.000 "model_number": "SPDK bdev Controller", 00:10:41.000 "max_namespaces": 32, 00:10:41.000 "min_cntlid": 1, 00:10:41.000 "max_cntlid": 65519, 00:10:41.000 "namespaces": [ 00:10:41.000 { 00:10:41.000 "nsid": 1, 00:10:41.000 "bdev_name": "Null2", 00:10:41.000 "name": "Null2", 00:10:41.000 "nguid": "D958F93F5E704B7C9DC2909FEB30F286", 
00:10:41.000 "uuid": "d958f93f-5e70-4b7c-9dc2-909feb30f286" 00:10:41.000 } 00:10:41.000 ] 00:10:41.000 }, 00:10:41.000 { 00:10:41.000 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:41.000 "subtype": "NVMe", 00:10:41.000 "listen_addresses": [ 00:10:41.000 { 00:10:41.000 "trtype": "TCP", 00:10:41.000 "adrfam": "IPv4", 00:10:41.000 "traddr": "10.0.0.2", 00:10:41.000 "trsvcid": "4420" 00:10:41.000 } 00:10:41.000 ], 00:10:41.000 "allow_any_host": true, 00:10:41.000 "hosts": [], 00:10:41.000 "serial_number": "SPDK00000000000003", 00:10:41.000 "model_number": "SPDK bdev Controller", 00:10:41.000 "max_namespaces": 32, 00:10:41.000 "min_cntlid": 1, 00:10:41.000 "max_cntlid": 65519, 00:10:41.000 "namespaces": [ 00:10:41.000 { 00:10:41.000 "nsid": 1, 00:10:41.000 "bdev_name": "Null3", 00:10:41.000 "name": "Null3", 00:10:41.000 "nguid": "994FF3601DD342F5BC594BDA56C82977", 00:10:41.000 "uuid": "994ff360-1dd3-42f5-bc59-4bda56c82977" 00:10:41.000 } 00:10:41.000 ] 00:10:41.000 }, 00:10:41.001 { 00:10:41.001 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:41.001 "subtype": "NVMe", 00:10:41.001 "listen_addresses": [ 00:10:41.001 { 00:10:41.001 "trtype": "TCP", 00:10:41.001 "adrfam": "IPv4", 00:10:41.001 "traddr": "10.0.0.2", 00:10:41.001 "trsvcid": "4420" 00:10:41.001 } 00:10:41.001 ], 00:10:41.001 "allow_any_host": true, 00:10:41.001 "hosts": [], 00:10:41.001 "serial_number": "SPDK00000000000004", 00:10:41.001 "model_number": "SPDK bdev Controller", 00:10:41.001 "max_namespaces": 32, 00:10:41.001 "min_cntlid": 1, 00:10:41.001 "max_cntlid": 65519, 00:10:41.001 "namespaces": [ 00:10:41.001 { 00:10:41.001 "nsid": 1, 00:10:41.001 "bdev_name": "Null4", 00:10:41.001 "name": "Null4", 00:10:41.001 "nguid": "1FD1E12F70304895B0C143B58A9ECFC3", 00:10:41.001 "uuid": "1fd1e12f-7030-4895-b0c1-43b58a9ecfc3" 00:10:41.001 } 00:10:41.001 ] 00:10:41.001 } 00:10:41.001 ] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 
09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.001 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.001 rmmod nvme_tcp 00:10:41.260 rmmod nvme_fabrics 00:10:41.260 rmmod nvme_keyring 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2826929 ']' 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2826929 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2826929 ']' 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2826929 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826929 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.260 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2826929' 00:10:41.260 killing process with pid 2826929 00:10:41.261 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2826929 00:10:41.261 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2826929 00:10:41.519 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.519 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.519 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.519 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:41.520 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:41.520 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.520 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.520 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.520 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.520 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.520 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.520 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.425 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.425 00:10:43.425 real 0m9.348s 00:10:43.425 user 0m5.640s 00:10:43.425 sys 0m4.894s 00:10:43.425 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.425 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.425 ************************************ 00:10:43.425 END TEST nvmf_target_discovery 00:10:43.425 ************************************ 00:10:43.425 09:43:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.425 09:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.425 09:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.425 09:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.425 ************************************ 00:10:43.425 START TEST nvmf_referrals 00:10:43.425 ************************************ 00:10:43.425 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.685 * Looking for test storage... 
00:10:43.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # lcov --version 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:43.685 09:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:10:43.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.685 
--rc genhtml_branch_coverage=1 00:10:43.685 --rc genhtml_function_coverage=1 00:10:43.685 --rc genhtml_legend=1 00:10:43.685 --rc geninfo_all_blocks=1 00:10:43.685 --rc geninfo_unexecuted_blocks=1 00:10:43.685 00:10:43.685 ' 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:10:43.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.685 --rc genhtml_branch_coverage=1 00:10:43.685 --rc genhtml_function_coverage=1 00:10:43.685 --rc genhtml_legend=1 00:10:43.685 --rc geninfo_all_blocks=1 00:10:43.685 --rc geninfo_unexecuted_blocks=1 00:10:43.685 00:10:43.685 ' 00:10:43.685 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:10:43.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.686 --rc genhtml_branch_coverage=1 00:10:43.686 --rc genhtml_function_coverage=1 00:10:43.686 --rc genhtml_legend=1 00:10:43.686 --rc geninfo_all_blocks=1 00:10:43.686 --rc geninfo_unexecuted_blocks=1 00:10:43.686 00:10:43.686 ' 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:10:43.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.686 --rc genhtml_branch_coverage=1 00:10:43.686 --rc genhtml_function_coverage=1 00:10:43.686 --rc genhtml_legend=1 00:10:43.686 --rc geninfo_all_blocks=1 00:10:43.686 --rc geninfo_unexecuted_blocks=1 00:10:43.686 00:10:43.686 ' 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.686 
09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.686 09:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.686 09:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.686 09:43:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:50.259 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:50.259 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:50.259 Found net devices under 0000:86:00.0: cvl_0_0 00:10:50.259 09:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:50.259 Found net devices under 0000:86:00.1: cvl_0_1 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.259 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:10:50.260 00:10:50.260 --- 10.0.0.2 ping statistics --- 00:10:50.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.260 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:10:50.260 00:10:50.260 --- 10.0.0.1 ping statistics --- 00:10:50.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.260 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2830599 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2830599 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2830599 ']' 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.260 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 [2024-11-20 09:43:12.987986] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:10:50.260 [2024-11-20 09:43:12.988032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.260 [2024-11-20 09:43:13.067705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.260 [2024-11-20 09:43:13.111131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.260 [2024-11-20 09:43:13.111167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:50.260 [2024-11-20 09:43:13.111175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.260 [2024-11-20 09:43:13.111181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.260 [2024-11-20 09:43:13.111186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.260 [2024-11-20 09:43:13.112654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.260 [2024-11-20 09:43:13.112763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.260 [2024-11-20 09:43:13.112779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.260 [2024-11-20 09:43:13.112784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 [2024-11-20 09:43:13.261583] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 [2024-11-20 09:43:13.274833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:50.260 09:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.260 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.261 09:43:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.261 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.520 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.779 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.779 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:50.779 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:50.779 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:50.779 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.779 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.779 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.779 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.779 09:43:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.779 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:50.779 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:50.779 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:50.779 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:50.779 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:50.779 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.779 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:51.038 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:51.038 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:51.038 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:51.038 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:51.038 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.038 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.297 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:51.556 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:51.557 09:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:51.557 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:51.557 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:51.557 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.557 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.816 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.816 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:51.816 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:51.816 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:51.816 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:51.816 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.816 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:51.816 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.075 rmmod nvme_tcp 00:10:52.075 rmmod nvme_fabrics 00:10:52.075 rmmod nvme_keyring 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2830599 ']' 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2830599 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2830599 ']' 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2830599 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2830599 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2830599' 00:10:52.075 killing process with pid 2830599 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2830599 00:10:52.075 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2830599 00:10:52.334 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.334 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.335 09:43:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.240 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.240 00:10:54.240 real 0m10.807s 00:10:54.240 user 0m12.080s 00:10:54.240 sys 0m5.196s 00:10:54.240 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.240 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.240 
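The `get_referral_ips nvme` steps in the trace above all reduce to one jq filter over `nvme discover -o json` output: drop the "current discovery subsystem" record and print the remaining `traddr` values sorted. A self-contained sketch of that filter follows; the JSON here is a hand-written illustrative sample (not captured from this run), and `jq` is assumed to be installed:

```shell
# Sample discovery output (illustrative; real data comes from
# `nvme discover ... -o json` as in referrals.sh@26).
records='{
  "records": [
    { "subtype": "current discovery subsystem",  "traddr": "10.0.0.2"  },
    { "subtype": "discovery subsystem referral", "traddr": "127.0.0.2" },
    { "subtype": "nvme subsystem",               "traddr": "127.0.0.2" }
  ]
}'

# Same filter as the trace: keep every record except the current discovery
# subsystem, emit its traddr, and sort so the result compares stably
# against the list returned by `rpc_cmd nvmf_discovery_get_referrals`.
echo "$records" \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort
```

The test script compares this sorted list word-for-word against the RPC view of the referrals, which is why both sides are piped through `sort` before the `[[ ... == ... ]]` check.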
************************************ 00:10:54.240 END TEST nvmf_referrals 00:10:54.240 ************************************ 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.500 ************************************ 00:10:54.500 START TEST nvmf_connect_disconnect 00:10:54.500 ************************************ 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:54.500 * Looking for test storage... 
00:10:54.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # lcov --version 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:10:54.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.500 --rc genhtml_branch_coverage=1 00:10:54.500 --rc genhtml_function_coverage=1 00:10:54.500 --rc genhtml_legend=1 00:10:54.500 --rc geninfo_all_blocks=1 00:10:54.500 --rc geninfo_unexecuted_blocks=1 00:10:54.500 00:10:54.500 ' 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:10:54.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.500 --rc genhtml_branch_coverage=1 00:10:54.500 --rc genhtml_function_coverage=1 00:10:54.500 --rc genhtml_legend=1 00:10:54.500 --rc geninfo_all_blocks=1 00:10:54.500 --rc geninfo_unexecuted_blocks=1 00:10:54.500 00:10:54.500 ' 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:10:54.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.500 --rc genhtml_branch_coverage=1 00:10:54.500 --rc genhtml_function_coverage=1 00:10:54.500 --rc genhtml_legend=1 00:10:54.500 --rc geninfo_all_blocks=1 00:10:54.500 --rc geninfo_unexecuted_blocks=1 00:10:54.500 00:10:54.500 ' 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:10:54.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.500 --rc genhtml_branch_coverage=1 00:10:54.500 --rc genhtml_function_coverage=1 00:10:54.500 --rc genhtml_legend=1 00:10:54.500 --rc geninfo_all_blocks=1 00:10:54.500 --rc geninfo_unexecuted_blocks=1 00:10:54.500 00:10:54.500 ' 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.500 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.501 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.760 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.760 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.760 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:54.760 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.760 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.761 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.334 09:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.334 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.335 09:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:01.335 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:01.335 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.335 09:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:01.335 Found net devices under 0000:86:00.0: cvl_0_0 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.335 09:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:01.335 Found net devices under 0000:86:00.1: cvl_0_1 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.335 09:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:11:01.335 00:11:01.335 --- 10.0.0.2 ping statistics --- 00:11:01.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.335 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:11:01.335 00:11:01.335 --- 10.0.0.1 ping statistics --- 00:11:01.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.335 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.335 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
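The network plumbing traced above (namespace creation, address assignment, firewall rule, and ping verification) can be summarized as a standalone sketch. Interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.1/10.0.0.2 addresses are taken directly from the log; the script assumes root privileges and the same two-port NIC layout, so treat it as an illustration of the harness's topology rather than a general-purpose setup script:

```shell
# Sketch of the topology nvmftestinit builds (requires root; names from the log).
# cvl_0_0 is moved into a namespace and becomes the target-side interface;
# cvl_0_1 stays in the default namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port (4420) on the initiator-side interface, then verify
# reachability in both directions before starting the target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The harness then launches `nvmf_tgt` inside the namespace (`ip netns exec cvl_0_0_ns_spdk ...`), so the target listens on 10.0.0.2 while the host-side initiator connects from 10.0.0.1.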
nvmfpid=2834574 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2834574 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2834574 ']' 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.336 09:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.336 [2024-11-20 09:43:23.849311] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:11:01.336 [2024-11-20 09:43:23.849364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.336 [2024-11-20 09:43:23.929325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.336 [2024-11-20 09:43:23.972926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:01.336 [2024-11-20 09:43:23.972971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.336 [2024-11-20 09:43:23.972978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.336 [2024-11-20 09:43:23.972984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.336 [2024-11-20 09:43:23.972990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.336 [2024-11-20 09:43:23.974429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.336 [2024-11-20 09:43:23.974540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.336 [2024-11-20 09:43:23.974645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.336 [2024-11-20 09:43:23.974647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:01.336 09:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.336 [2024-11-20 09:43:24.112281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.336 09:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:01.336 [2024-11-20 09:43:24.178417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:01.336 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:04.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:17.769 09:43:40 
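The `connect_disconnect.sh` setup distilled from the `rpc_cmd` calls in the trace above is, roughly, the following RPC sequence. The transport options, malloc bdev size/block size (64 MiB, 512 B), subsystem NQN, serial, and listener address are exactly those shown in the log; invoking them via `scripts/rpc.py` from an SPDK checkout against a running `nvmf_tgt` is an assumption about the standard workflow, not something the log itself shows:

```shell
# Sketch of the target-side RPC sequence (assumes a running nvmf_tgt and
# SPDK's scripts/rpc.py; parameters are taken from the log above).
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # TCP transport, 8 KiB in-capsule data
rpc.py bdev_malloc_create 64 512                           # 64 MiB RAM-backed bdev -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                             # allow-any-host, fixed serial
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

After this, the test loop repeatedly runs `nvme connect` / `nvme disconnect` against the listener (`num_iterations=5` in the log), which is what produces the five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines.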
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:17.769 rmmod nvme_tcp 00:11:17.769 rmmod nvme_fabrics 00:11:17.769 rmmod nvme_keyring 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2834574 ']' 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2834574 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2834574 ']' 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2834574 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834574 
00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834574' 00:11:17.769 killing process with pid 2834574 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2834574 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2834574 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.769 09:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.769 09:43:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.675 00:11:19.675 real 0m25.180s 00:11:19.675 user 1m8.334s 00:11:19.675 sys 0m5.742s 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:19.675 ************************************ 00:11:19.675 END TEST nvmf_connect_disconnect 00:11:19.675 ************************************ 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.675 ************************************ 00:11:19.675 START TEST nvmf_multitarget 00:11:19.675 ************************************ 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:19.675 * Looking for test storage... 
00:11:19.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # lcov --version 00:11:19.675 09:43:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.935 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:11:19.936 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.936 --rc genhtml_branch_coverage=1 00:11:19.936 --rc genhtml_function_coverage=1 00:11:19.936 --rc genhtml_legend=1 00:11:19.936 --rc geninfo_all_blocks=1 00:11:19.936 --rc geninfo_unexecuted_blocks=1 00:11:19.936 00:11:19.936 ' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:11:19.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.936 --rc genhtml_branch_coverage=1 00:11:19.936 --rc genhtml_function_coverage=1 00:11:19.936 --rc genhtml_legend=1 00:11:19.936 --rc geninfo_all_blocks=1 00:11:19.936 --rc geninfo_unexecuted_blocks=1 00:11:19.936 00:11:19.936 ' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:11:19.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.936 --rc genhtml_branch_coverage=1 00:11:19.936 --rc genhtml_function_coverage=1 00:11:19.936 --rc genhtml_legend=1 00:11:19.936 --rc geninfo_all_blocks=1 00:11:19.936 --rc geninfo_unexecuted_blocks=1 00:11:19.936 00:11:19.936 ' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:11:19.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.936 --rc genhtml_branch_coverage=1 00:11:19.936 --rc genhtml_function_coverage=1 00:11:19.936 --rc genhtml_legend=1 00:11:19.936 --rc geninfo_all_blocks=1 00:11:19.936 --rc geninfo_unexecuted_blocks=1 00:11:19.936 00:11:19.936 ' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.936 09:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.936 09:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.936 09:43:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:26.508 09:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.508 09:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:26.508 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:26.508 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.508 09:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:26.508 Found net devices under 0000:86:00.0: cvl_0_0 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.508 
09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.508 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:26.509 Found net devices under 0000:86:00.1: cvl_0_1 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.509 09:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:11:26.509 00:11:26.509 --- 10.0.0.2 ping statistics --- 00:11:26.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.509 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:11:26.509 09:43:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:11:26.509 00:11:26.509 --- 10.0.0.1 ping statistics --- 00:11:26.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.509 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2840957 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2840957 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2840957 ']' 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.509 [2024-11-20 09:43:49.098285] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:11:26.509 [2024-11-20 09:43:49.098330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.509 [2024-11-20 09:43:49.175449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.509 [2024-11-20 09:43:49.216371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.509 [2024-11-20 09:43:49.216410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:26.509 [2024-11-20 09:43:49.216417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.509 [2024-11-20 09:43:49.216423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.509 [2024-11-20 09:43:49.216429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.509 [2024-11-20 09:43:49.217968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.509 [2024-11-20 09:43:49.218080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.509 [2024-11-20 09:43:49.218190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.509 [2024-11-20 09:43:49.218190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:26.509 09:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:26.509 "nvmf_tgt_1" 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:26.509 "nvmf_tgt_2" 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:26.509 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:26.768 true 00:11:26.768 09:43:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:26.768 true 00:11:26.768 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:26.768 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.027 rmmod nvme_tcp 00:11:27.027 rmmod nvme_fabrics 00:11:27.027 rmmod nvme_keyring 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2840957 ']' 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2840957 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2840957 ']' 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2840957 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.027 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2840957 00:11:27.028 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.028 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.028 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2840957' 00:11:27.028 killing process with pid 2840957 00:11:27.028 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2840957 00:11:27.028 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2840957 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.286 09:43:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.191 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.191 00:11:29.191 real 0m9.623s 00:11:29.191 user 0m7.353s 00:11:29.191 sys 0m4.886s 00:11:29.191 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.191 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.191 ************************************ 00:11:29.191 END TEST nvmf_multitarget 00:11:29.191 ************************************ 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.451 ************************************ 00:11:29.451 START TEST nvmf_rpc 00:11:29.451 ************************************ 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:29.451 * Looking for test storage... 
00:11:29.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # lcov --version 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.451 09:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:11:29.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.451 --rc genhtml_branch_coverage=1 00:11:29.451 --rc genhtml_function_coverage=1 00:11:29.451 --rc genhtml_legend=1 00:11:29.451 --rc geninfo_all_blocks=1 00:11:29.451 --rc geninfo_unexecuted_blocks=1 
00:11:29.451 00:11:29.451 ' 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:11:29.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.451 --rc genhtml_branch_coverage=1 00:11:29.451 --rc genhtml_function_coverage=1 00:11:29.451 --rc genhtml_legend=1 00:11:29.451 --rc geninfo_all_blocks=1 00:11:29.451 --rc geninfo_unexecuted_blocks=1 00:11:29.451 00:11:29.451 ' 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:11:29.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.451 --rc genhtml_branch_coverage=1 00:11:29.451 --rc genhtml_function_coverage=1 00:11:29.451 --rc genhtml_legend=1 00:11:29.451 --rc geninfo_all_blocks=1 00:11:29.451 --rc geninfo_unexecuted_blocks=1 00:11:29.451 00:11:29.451 ' 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:11:29.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.451 --rc genhtml_branch_coverage=1 00:11:29.451 --rc genhtml_function_coverage=1 00:11:29.451 --rc genhtml_legend=1 00:11:29.451 --rc geninfo_all_blocks=1 00:11:29.451 --rc geninfo_unexecuted_blocks=1 00:11:29.451 00:11:29.451 ' 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.451 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.452 09:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.452 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:29.711 09:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.711 09:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.279 
09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:36.279 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:36.279 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:36.279 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:36.280 Found net devices under 0000:86:00.0: cvl_0_0 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:36.280 Found net devices under 0000:86:00.1: cvl_0_1 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.280 09:43:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.280 
09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:11:36.280 00:11:36.280 --- 10.0.0.2 ping statistics --- 00:11:36.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.280 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:36.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:11:36.280 00:11:36.280 --- 10.0.0.1 ping statistics --- 00:11:36.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.280 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2844745 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2844745 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2844745 ']' 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.280 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.281 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.281 09:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.281 [2024-11-20 09:43:58.817435] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:11:36.281 [2024-11-20 09:43:58.817486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.281 [2024-11-20 09:43:58.896269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.281 [2024-11-20 09:43:58.940127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.281 [2024-11-20 09:43:58.940166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:36.281 [2024-11-20 09:43:58.940173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.281 [2024-11-20 09:43:58.940179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.281 [2024-11-20 09:43:58.940184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.281 [2024-11-20 09:43:58.941646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.281 [2024-11-20 09:43:58.941753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.281 [2024-11-20 09:43:58.941860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.281 [2024-11-20 09:43:58.941860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.281 09:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:36.281 "tick_rate": 2300000000, 00:11:36.281 "poll_groups": [ 00:11:36.281 { 00:11:36.281 "name": "nvmf_tgt_poll_group_000", 00:11:36.281 "admin_qpairs": 0, 00:11:36.281 "io_qpairs": 0, 00:11:36.281 "current_admin_qpairs": 0, 00:11:36.281 "current_io_qpairs": 0, 00:11:36.281 "pending_bdev_io": 0, 00:11:36.281 "completed_nvme_io": 0, 00:11:36.281 "transports": [] 00:11:36.281 }, 00:11:36.281 { 00:11:36.281 "name": "nvmf_tgt_poll_group_001", 00:11:36.281 "admin_qpairs": 0, 00:11:36.281 "io_qpairs": 0, 00:11:36.281 "current_admin_qpairs": 0, 00:11:36.281 "current_io_qpairs": 0, 00:11:36.281 "pending_bdev_io": 0, 00:11:36.281 "completed_nvme_io": 0, 00:11:36.281 "transports": [] 00:11:36.281 }, 00:11:36.281 { 00:11:36.281 "name": "nvmf_tgt_poll_group_002", 00:11:36.281 "admin_qpairs": 0, 00:11:36.281 "io_qpairs": 0, 00:11:36.281 "current_admin_qpairs": 0, 00:11:36.281 "current_io_qpairs": 0, 00:11:36.281 "pending_bdev_io": 0, 00:11:36.281 "completed_nvme_io": 0, 00:11:36.281 "transports": [] 00:11:36.281 }, 00:11:36.281 { 00:11:36.281 "name": "nvmf_tgt_poll_group_003", 00:11:36.281 "admin_qpairs": 0, 00:11:36.281 "io_qpairs": 0, 00:11:36.281 "current_admin_qpairs": 0, 00:11:36.281 "current_io_qpairs": 0, 00:11:36.281 "pending_bdev_io": 0, 00:11:36.281 "completed_nvme_io": 0, 00:11:36.281 "transports": [] 00:11:36.281 } 00:11:36.281 ] 00:11:36.281 }' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:36.281 09:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.281 [2024-11-20 09:43:59.187754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:36.281 "tick_rate": 2300000000, 00:11:36.281 "poll_groups": [ 00:11:36.281 { 00:11:36.281 "name": "nvmf_tgt_poll_group_000", 00:11:36.281 "admin_qpairs": 0, 00:11:36.281 "io_qpairs": 0, 00:11:36.281 "current_admin_qpairs": 0, 00:11:36.281 "current_io_qpairs": 0, 00:11:36.281 "pending_bdev_io": 0, 00:11:36.281 "completed_nvme_io": 0, 00:11:36.281 "transports": [ 00:11:36.281 { 00:11:36.281 "trtype": "TCP" 00:11:36.281 } 00:11:36.281 ] 00:11:36.281 }, 00:11:36.281 { 00:11:36.281 "name": "nvmf_tgt_poll_group_001", 00:11:36.281 "admin_qpairs": 0, 00:11:36.281 "io_qpairs": 0, 00:11:36.281 "current_admin_qpairs": 0, 00:11:36.281 "current_io_qpairs": 0, 00:11:36.281 "pending_bdev_io": 0, 00:11:36.281 
"completed_nvme_io": 0, 00:11:36.281 "transports": [ 00:11:36.281 { 00:11:36.281 "trtype": "TCP" 00:11:36.281 } 00:11:36.281 ] 00:11:36.281 }, 00:11:36.281 { 00:11:36.281 "name": "nvmf_tgt_poll_group_002", 00:11:36.281 "admin_qpairs": 0, 00:11:36.281 "io_qpairs": 0, 00:11:36.281 "current_admin_qpairs": 0, 00:11:36.281 "current_io_qpairs": 0, 00:11:36.281 "pending_bdev_io": 0, 00:11:36.281 "completed_nvme_io": 0, 00:11:36.281 "transports": [ 00:11:36.281 { 00:11:36.281 "trtype": "TCP" 00:11:36.281 } 00:11:36.281 ] 00:11:36.281 }, 00:11:36.281 { 00:11:36.281 "name": "nvmf_tgt_poll_group_003", 00:11:36.281 "admin_qpairs": 0, 00:11:36.281 "io_qpairs": 0, 00:11:36.281 "current_admin_qpairs": 0, 00:11:36.281 "current_io_qpairs": 0, 00:11:36.281 "pending_bdev_io": 0, 00:11:36.281 "completed_nvme_io": 0, 00:11:36.281 "transports": [ 00:11:36.281 { 00:11:36.281 "trtype": "TCP" 00:11:36.281 } 00:11:36.281 ] 00:11:36.281 } 00:11:36.281 ] 00:11:36.281 }' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:36.281 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:36.281 
09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.282 Malloc1 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:36.282 09:43:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.282 [2024-11-20 09:43:59.371618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:36.282 [2024-11-20 09:43:59.400219] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:36.282 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:36.282 could not add new controller: failed to write to nvme-fabrics device 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.282 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.658 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.658 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:37.658 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.658 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:37.658 09:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:39.562 09:44:02 
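`waitforserial`, traced in the `i++ <= 15` loop above, polls `lsblk -l -o NAME,SERIAL` until a block device with the expected serial (`SPDKISFASTANDAWESOME`) appears. A self-contained sketch; the `LIST_CMD` injection point is an assumption added here so the loop can be exercised without real NVMe devices, whereas the real helper calls `lsblk` directly:

```shell
# Sketch of autotest_common.sh's waitforserial: poll a device listing
# until the expected serial shows up, bounded by ~16 attempts.
# LIST_CMD is an illustrative override; default matches the trace.
waitforserial() {
    local serial=$1 want=${2:-1} i=0 n
    while (( i++ <= 15 )); do
        n=$(${LIST_CMD:-lsblk -l -o NAME,SERIAL} | grep -c "$serial" || true)
        (( n >= want )) && return 0
        sleep 0.2
    done
    return 1
}
```

The real helper sleeps 2 s between attempts before checking, giving the kernel time to enumerate the namespace after `nvme connect`.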
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.562 [2024-11-20 09:44:02.813555] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:39.562 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:39.562 could not add new controller: failed to write to nvme-fabrics device 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:39.562 
09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.562 09:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.938 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.938 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:40.938 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.939 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:40.939 09:44:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:42.843 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:42.843 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:42.843 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:42.843 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:42.843 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.843 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:42.843 09:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:42.843 09:44:06 
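From here the test loops (`seq 1 5`) through the same subsystem lifecycle each iteration. A dry-run sketch of one iteration, with the RPC names and NQN taken from the trace; commands are printed rather than executed (swap `echo` for SPDK's `scripts/rpc.py` to run them for real):

```shell
# Dry-run sketch of one target/rpc.sh loop iteration. 'run' just
# prints; replace echo with the real rpc.py to drive a live target.
NQN=nqn.2016-06.io.spdk:cnode1
run() { echo "rpc.py $*"; }
subsystem_cycle() {
    run nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    run nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    run nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    run nvmf_subsystem_allow_any_host "$NQN"
    # ...host runs nvme connect, waitforserial, I/O, nvme disconnect...
    run nvmf_subsystem_remove_ns "$NQN" 5
    run nvmf_delete_subsystem "$NQN"
}
```

Note the teardown order matches the trace: the namespace is removed before the subsystem is deleted.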
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.843 [2024-11-20 09:44:06.116804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.843 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.221 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.221 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:44.221 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.221 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:44.221 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.125 
09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.125 [2024-11-20 09:44:09.411891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.125 09:44:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.502 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.502 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.502 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.502 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.502 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:49.405 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.665 09:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.665 [2024-11-20 09:44:12.776226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.665 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.043 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.043 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:51.043 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.043 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:51.043 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:52.953 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:52.953 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:52.953 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.953 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:52.953 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.953 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:52.953 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
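The `waitforserial` / `waitforserial_disconnect` helpers traced above poll `lsblk -l -o NAME,SERIAL` until the expected number of devices with the given serial appears, sleeping between tries with a retry cap. The sketch below reproduces that polling idiom in a self-contained form; `wait_for_count`, `demo_probe`, and the temp-file counter are hypothetical stand-ins (the real helper shells out to `lsblk` and greps for the serial, e.g. SPDKISFASTANDAWESOME), so treat this as an illustration of the loop, not the actual test code.

```shell
# Minimal sketch of the waitforserial idiom, assuming a generic "probe"
# command in place of: lsblk -l -o NAME,SERIAL | grep -c <serial>
wait_for_count() {
    local probe=$1 expected=$2 i=0
    local n
    while (( i++ <= 15 )); do           # same retry cap as the traced helper
        n=$($probe)
        (( n == expected )) && return 0
        sleep 0.1                        # the real helper sleeps 2s per poll
    done
    return 1
}

# Hypothetical probe: reports 0 devices for the first two polls, then 1,
# mimicking a namespace that takes a moment to show up after nvme connect.
# A temp file carries state because $(...) runs the probe in a subshell.
count_file=$(mktemp)
echo 0 > "$count_file"
demo_probe() {
    local c
    c=$(cat "$count_file")
    echo $((c + 1)) > "$count_file"
    [ "$c" -ge 2 ] && echo 1 || echo 0
}

wait_for_count demo_probe 1 && result=connected || result=timeout
rm -f "$count_file"
echo "$result"
```

The grep -c / arithmetic-compare structure is why the trace shows `nvme_devices=1` followed by `(( nvme_devices == nvme_device_counter ))` and `return 0` on each successful connect.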
00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.953 [2024-11-20 09:44:16.106929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.953 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.954 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.954 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.954 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.954 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.954 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.954 09:44:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.332 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.332 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:54.332 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:54.332 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:54.332 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.236 [2024-11-20 09:44:19.412100] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.236 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.615 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.615 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.615 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.615 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.615 09:44:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.521 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 [2024-11-20 09:44:22.726186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 [2024-11-20 09:44:22.774250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 
09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:11:59.522 [2024-11-20 09:44:22.822380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.522 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.782 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.782 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.782 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.782 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.782 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.782 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.782 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.782 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 [2024-11-20 09:44:22.870546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 [2024-11-20 09:44:22.918707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:59.783 "tick_rate": 2300000000, 00:11:59.783 "poll_groups": [ 00:11:59.783 { 00:11:59.783 "name": "nvmf_tgt_poll_group_000", 00:11:59.783 "admin_qpairs": 2, 00:11:59.783 "io_qpairs": 168, 00:11:59.783 "current_admin_qpairs": 0, 00:11:59.783 "current_io_qpairs": 0, 00:11:59.783 "pending_bdev_io": 0, 00:11:59.783 "completed_nvme_io": 267, 00:11:59.783 "transports": [ 00:11:59.783 { 00:11:59.783 "trtype": "TCP" 00:11:59.783 } 00:11:59.783 ] 00:11:59.783 }, 00:11:59.783 { 00:11:59.783 "name": "nvmf_tgt_poll_group_001", 00:11:59.783 "admin_qpairs": 2, 00:11:59.783 "io_qpairs": 168, 00:11:59.783 "current_admin_qpairs": 0, 00:11:59.783 "current_io_qpairs": 0, 00:11:59.783 "pending_bdev_io": 0, 00:11:59.783 "completed_nvme_io": 219, 00:11:59.783 "transports": [ 00:11:59.783 { 00:11:59.783 "trtype": "TCP" 00:11:59.783 } 00:11:59.783 ] 00:11:59.783 }, 00:11:59.783 { 00:11:59.783 "name": "nvmf_tgt_poll_group_002", 00:11:59.783 "admin_qpairs": 1, 00:11:59.783 "io_qpairs": 168, 00:11:59.783 "current_admin_qpairs": 0, 00:11:59.783 "current_io_qpairs": 0, 00:11:59.783 "pending_bdev_io": 0, 
00:11:59.783 "completed_nvme_io": 218, 00:11:59.783 "transports": [ 00:11:59.783 { 00:11:59.783 "trtype": "TCP" 00:11:59.783 } 00:11:59.783 ] 00:11:59.783 }, 00:11:59.783 { 00:11:59.783 "name": "nvmf_tgt_poll_group_003", 00:11:59.783 "admin_qpairs": 2, 00:11:59.783 "io_qpairs": 168, 00:11:59.783 "current_admin_qpairs": 0, 00:11:59.783 "current_io_qpairs": 0, 00:11:59.783 "pending_bdev_io": 0, 00:11:59.783 "completed_nvme_io": 318, 00:11:59.783 "transports": [ 00:11:59.783 { 00:11:59.783 "trtype": "TCP" 00:11:59.783 } 00:11:59.783 ] 00:11:59.783 } 00:11:59.783 ] 00:11:59.783 }' 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:59.783 09:44:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
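The `jsum` helper traced above (target/rpc.sh@19-20) pipes the `nvmf_get_stats` JSON through `jq "$filter"` and totals the resulting numbers with awk. The awk stage alone, fed the four `io_qpairs` values from the stats blob above, reproduces the sum the log then tests:

```shell
# awk summation stage of jsum: one number per line in, total out.
# The four 168s are the io_qpairs values from the stats JSON above.
total=$(printf '%s\n' 168 168 168 168 | awk '{s+=$1} END {print s}')
echo "$total"   # 672, matching the (( 672 > 0 )) check that follows in the log
```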
target/rpc.sh@123 -- # nvmftestfini 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.783 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.783 rmmod nvme_tcp 00:11:59.783 rmmod nvme_fabrics 00:12:00.043 rmmod nvme_keyring 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2844745 ']' 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2844745 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2844745 ']' 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2844745 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2844745 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2844745' 00:12:00.043 killing process with pid 2844745 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2844745 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2844745 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.043 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:00.303 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:00.303 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.303 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.303 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.303 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.303 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.303 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.303 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.210 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.210 00:12:02.210 real 0m32.868s 00:12:02.210 user 1m38.966s 00:12:02.210 sys 0m6.592s 00:12:02.210 09:44:25 
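The `killprocess` trace above (autotest_common.sh@954-978) shows the pattern used to tear down the target app: look up the process name with `ps --no-headers -o comm=`, refuse to kill if it resolves to `sudo`, then kill and wait. A minimal reproduction with a throwaway `sleep` in place of the SPDK reactor process:

```shell
# Sketch of the killprocess guard: verify the PID's command name
# before signalling it, then reap it so no zombie is left behind.
sleep 60 &
pid=$!
name=$(ps --no-headers -o comm= "$pid")
if [ "$name" != "sudo" ]; then
    kill "$pid"
fi
wait "$pid" 2>/dev/null || true   # wait returns 143 for a SIGTERM'd child
echo "killed $name"
```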
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.210 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.210 ************************************ 00:12:02.210 END TEST nvmf_rpc 00:12:02.210 ************************************ 00:12:02.210 09:44:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:02.210 09:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.210 09:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.210 09:44:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.210 ************************************ 00:12:02.210 START TEST nvmf_invalid 00:12:02.210 ************************************ 00:12:02.210 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:02.471 * Looking for test storage... 
00:12:02.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # lcov --version 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:12:02.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.471 --rc genhtml_branch_coverage=1 00:12:02.471 --rc 
genhtml_function_coverage=1 00:12:02.471 --rc genhtml_legend=1 00:12:02.471 --rc geninfo_all_blocks=1 00:12:02.471 --rc geninfo_unexecuted_blocks=1 00:12:02.471 00:12:02.471 ' 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:12:02.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.471 --rc genhtml_branch_coverage=1 00:12:02.471 --rc genhtml_function_coverage=1 00:12:02.471 --rc genhtml_legend=1 00:12:02.471 --rc geninfo_all_blocks=1 00:12:02.471 --rc geninfo_unexecuted_blocks=1 00:12:02.471 00:12:02.471 ' 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:12:02.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.471 --rc genhtml_branch_coverage=1 00:12:02.471 --rc genhtml_function_coverage=1 00:12:02.471 --rc genhtml_legend=1 00:12:02.471 --rc geninfo_all_blocks=1 00:12:02.471 --rc geninfo_unexecuted_blocks=1 00:12:02.471 00:12:02.471 ' 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:12:02.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.471 --rc genhtml_branch_coverage=1 00:12:02.471 --rc genhtml_function_coverage=1 00:12:02.471 --rc genhtml_legend=1 00:12:02.471 --rc geninfo_all_blocks=1 00:12:02.471 --rc geninfo_unexecuted_blocks=1 00:12:02.471 00:12:02.471 ' 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:02.471 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
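The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh@333-368) splits each version string on `.`, `-`, or `:` into arrays and compares component by component, padding missing components with zero. A self-contained sketch of that comparison logic (simplified; the real helper also handles `>`, `<=`, and `>=` via `case "$op"`):

```shell
# ver_lt A B: return 0 iff version A < B, comparing numeric components
# split on the same separators common.sh uses (IFS=.-:).
ver_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local n=${#v1[@]} i a b
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for ((i = 0; i < n; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && lcov_old=yes || lcov_old=no
echo "$lcov_old"   # yes: lcov 1.15 predates 2, as in the trace above
```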
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.472 09:44:25 
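Each `source /etc/opt/spdk-pkgdep/paths/export.sh` above prepends the same three toolchain directories again, which is why the traced PATH repeats `/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin` many times over. The duplication is harmless (lookup stops at the first match) but easy to collapse; an order-preserving dedupe sketch, not part of export.sh itself, using a demo string:

```shell
# Order-preserving PATH dedupe: split on ':', keep first occurrence only.
# PATH_DEMO is a hypothetical stand-in for the repeated PATH in the log.
PATH_DEMO="/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/bin:/bin"
deduped=$(printf '%s' "$PATH_DEMO" | awk -v RS=: -v ORS=: '!seen[$0]++')
deduped=${deduped%:}               # drop the trailing ':' awk's ORS leaves
echo "$deduped"                    # /opt/go/1.21.1/bin:/usr/bin:/bin
```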
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.472 09:44:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.472 09:44:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.049 09:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.049 09:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.049 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:09.050 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:09.050 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:09.050 Found net devices under 0000:86:00.0: cvl_0_0 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:09.050 Found net devices under 0000:86:00.1: cvl_0_1 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.050 09:44:31 
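The device-discovery loop above (nvmf/common.sh@410-429) globs `"/sys/bus/pci/devices/$pci/net/"*` and then strips everything up to the last slash with the `${arr[@]##*/}` array expansion, leaving bare interface names such as `cvl_0_0` and `cvl_0_1`. That expansion in isolation, on a sample sysfs path:

```shell
# ${arr[@]##*/} applies the longest-prefix strip "*/" to every element,
# turning full sysfs paths into interface names.
pci_net_devs=("/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"   # cvl_0_0
```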
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.050 09:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:12:09.050 00:12:09.050 --- 10.0.0.2 ping statistics --- 00:12:09.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.050 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:12:09.050 00:12:09.050 --- 10.0.0.1 ping statistics --- 00:12:09.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.050 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.050 09:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2852351 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2852351 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2852351 ']' 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.050 09:44:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.050 [2024-11-20 09:44:31.793010] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:12:09.050 [2024-11-20 09:44:31.793053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.050 [2024-11-20 09:44:31.876433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.050 [2024-11-20 09:44:31.919838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.050 [2024-11-20 09:44:31.919889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.050 [2024-11-20 09:44:31.919896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.050 [2024-11-20 09:44:31.919902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.050 [2024-11-20 09:44:31.919907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
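The RPC calls that follow feed deliberately malformed names to `nvmf_create_subsystem` and then glob-match the JSON-RPC error text, as seen in the `[[ ... == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]` step of the trace. A minimal standalone sketch of that matching idiom (the `$out` body here is a hand-written stand-in, not captured from a live target):

```shell
# Sketch of the error-matching idiom from target/invalid.sh: the test
# captures the JSON-RPC error body in $out, then glob-matches the
# expected message substring with [[ ... == *pattern* ]].
out='request: {"nqn": "nqn.2016-06.io.spdk:cnode18400", "tgt_name": "foobar"}
response: {"code": -32603, "message": "Unable to find target foobar"}'
if [[ $out == *"Unable to find target"* ]]; then
  echo "error message matched"
fi
```

The glob match (rather than exact comparison) keeps the test robust to incidental whitespace and request-echo differences in the RPC output.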
00:12:09.050 [2024-11-20 09:44:31.921526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.050 [2024-11-20 09:44:31.921644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.050 [2024-11-20 09:44:31.921660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.051 [2024-11-20 09:44:31.921665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.620 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.620 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:09.620 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.620 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.620 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:09.621 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.621 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:09.621 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18400 00:12:09.621 [2024-11-20 09:44:32.837875] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:09.621 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:09.621 { 00:12:09.621 "nqn": "nqn.2016-06.io.spdk:cnode18400", 00:12:09.621 "tgt_name": "foobar", 00:12:09.621 "method": "nvmf_create_subsystem", 00:12:09.621 "req_id": 1 00:12:09.621 } 00:12:09.621 Got JSON-RPC error 
response 00:12:09.621 response: 00:12:09.621 { 00:12:09.621 "code": -32603, 00:12:09.621 "message": "Unable to find target foobar" 00:12:09.621 }' 00:12:09.621 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:09.621 { 00:12:09.621 "nqn": "nqn.2016-06.io.spdk:cnode18400", 00:12:09.621 "tgt_name": "foobar", 00:12:09.621 "method": "nvmf_create_subsystem", 00:12:09.621 "req_id": 1 00:12:09.621 } 00:12:09.621 Got JSON-RPC error response 00:12:09.621 response: 00:12:09.621 { 00:12:09.621 "code": -32603, 00:12:09.621 "message": "Unable to find target foobar" 00:12:09.621 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:09.621 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:09.621 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32182 00:12:09.881 [2024-11-20 09:44:33.038595] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32182: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:09.881 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:09.881 { 00:12:09.881 "nqn": "nqn.2016-06.io.spdk:cnode32182", 00:12:09.881 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:09.881 "method": "nvmf_create_subsystem", 00:12:09.881 "req_id": 1 00:12:09.881 } 00:12:09.881 Got JSON-RPC error response 00:12:09.881 response: 00:12:09.881 { 00:12:09.881 "code": -32602, 00:12:09.881 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:09.881 }' 00:12:09.881 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:09.881 { 00:12:09.881 "nqn": "nqn.2016-06.io.spdk:cnode32182", 00:12:09.881 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:09.881 "method": "nvmf_create_subsystem", 
00:12:09.881 "req_id": 1 00:12:09.881 } 00:12:09.881 Got JSON-RPC error response 00:12:09.881 response: 00:12:09.881 { 00:12:09.881 "code": -32602, 00:12:09.881 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:09.881 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:09.881 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:09.881 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5630 00:12:10.142 [2024-11-20 09:44:33.243247] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5630: invalid model number 'SPDK_Controller' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:10.142 { 00:12:10.142 "nqn": "nqn.2016-06.io.spdk:cnode5630", 00:12:10.142 "model_number": "SPDK_Controller\u001f", 00:12:10.142 "method": "nvmf_create_subsystem", 00:12:10.142 "req_id": 1 00:12:10.142 } 00:12:10.142 Got JSON-RPC error response 00:12:10.142 response: 00:12:10.142 { 00:12:10.142 "code": -32602, 00:12:10.142 "message": "Invalid MN SPDK_Controller\u001f" 00:12:10.142 }' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:10.142 { 00:12:10.142 "nqn": "nqn.2016-06.io.spdk:cnode5630", 00:12:10.142 "model_number": "SPDK_Controller\u001f", 00:12:10.142 "method": "nvmf_create_subsystem", 00:12:10.142 "req_id": 1 00:12:10.142 } 00:12:10.142 Got JSON-RPC error response 00:12:10.142 response: 00:12:10.142 { 00:12:10.142 "code": -32602, 00:12:10.142 "message": "Invalid MN SPDK_Controller\u001f" 00:12:10.142 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:10.142 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:10.142 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:10.142 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:10.143 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.143 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'KA'\'')IKUXGa4pwU v@;q+m' 00:12:10.143 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'KA'\'')IKUXGa4pwU v@;q+m' nqn.2016-06.io.spdk:cnode21306 00:12:10.404 [2024-11-20 09:44:33.584374] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21306: invalid serial number 'KA')IKUXGa4pwU v@;q+m' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:10.404 { 00:12:10.404 "nqn": "nqn.2016-06.io.spdk:cnode21306", 00:12:10.404 "serial_number": "KA'\'')IKUXGa4pwU v@;q+m", 00:12:10.404 "method": "nvmf_create_subsystem", 00:12:10.404 "req_id": 1 00:12:10.404 } 00:12:10.404 Got JSON-RPC error response 00:12:10.404 response: 00:12:10.404 { 00:12:10.404 "code": -32602, 00:12:10.404 "message": "Invalid SN KA'\'')IKUXGa4pwU v@;q+m" 00:12:10.404 }' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:10.404 { 00:12:10.404 "nqn": "nqn.2016-06.io.spdk:cnode21306", 00:12:10.404 "serial_number": "KA')IKUXGa4pwU v@;q+m", 00:12:10.404 "method": "nvmf_create_subsystem", 00:12:10.404 "req_id": 1 00:12:10.404 } 00:12:10.404 Got JSON-RPC error response 00:12:10.404 response: 00:12:10.404 { 00:12:10.404 "code": -32602, 00:12:10.404 "message": "Invalid SN KA')IKUXGa4pwU v@;q+m" 00:12:10.404 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:10.404 
09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:10.404 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:10.404 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:10.404 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:10.404 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.405 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.405 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:10.665 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:10.665 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.665 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:10.666 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ { == \- ]] 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '{n4Vb =Q jK)Wj|a,E:%7VTvIfUB=N {r"{LCsr+l' 00:12:10.666 09:44:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '{n4Vb =Q jK)Wj|a,E:%7VTvIfUB=N {r"{LCsr+l' nqn.2016-06.io.spdk:cnode10985 00:12:10.926 [2024-11-20 09:44:34.053965] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10985: invalid model number '{n4Vb =Q jK)Wj|a,E:%7VTvIfUB=N {r"{LCsr+l' 00:12:10.926 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:10.926 { 00:12:10.926 "nqn": "nqn.2016-06.io.spdk:cnode10985", 00:12:10.926 "model_number": "{n4Vb =Q jK)Wj|a,E:%7VTvIfUB=N {r\"{LCsr+l", 00:12:10.926 "method": "nvmf_create_subsystem", 00:12:10.926 "req_id": 1 00:12:10.926 } 00:12:10.926 Got JSON-RPC error response 00:12:10.926 response: 00:12:10.926 { 00:12:10.926 "code": -32602, 00:12:10.926 "message": "Invalid MN {n4Vb =Q jK)Wj|a,E:%7VTvIfUB=N {r\"{LCsr+l" 00:12:10.926 }' 00:12:10.926 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:10.926 { 00:12:10.926 "nqn": 
"nqn.2016-06.io.spdk:cnode10985", 00:12:10.926 "model_number": "{n4Vb =Q jK)Wj|a,E:%7VTvIfUB=N {r\"{LCsr+l", 00:12:10.926 "method": "nvmf_create_subsystem", 00:12:10.926 "req_id": 1 00:12:10.926 } 00:12:10.926 Got JSON-RPC error response 00:12:10.926 response: 00:12:10.926 { 00:12:10.926 "code": -32602, 00:12:10.926 "message": "Invalid MN {n4Vb =Q jK)Wj|a,E:%7VTvIfUB=N {r\"{LCsr+l" 00:12:10.926 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:10.926 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:10.926 [2024-11-20 09:44:34.254700] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.185 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:11.185 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:11.185 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:11.185 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:11.185 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:11.185 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:11.445 [2024-11-20 09:44:34.668074] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:11.445 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:11.445 { 00:12:11.445 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:11.445 "listen_address": { 00:12:11.445 "trtype": "tcp", 00:12:11.445 "traddr": "", 00:12:11.445 "trsvcid": "4421" 
00:12:11.445 }, 00:12:11.445 "method": "nvmf_subsystem_remove_listener", 00:12:11.445 "req_id": 1 00:12:11.445 } 00:12:11.445 Got JSON-RPC error response 00:12:11.445 response: 00:12:11.445 { 00:12:11.445 "code": -32602, 00:12:11.445 "message": "Invalid parameters" 00:12:11.445 }' 00:12:11.445 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:11.445 { 00:12:11.445 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:11.445 "listen_address": { 00:12:11.445 "trtype": "tcp", 00:12:11.445 "traddr": "", 00:12:11.445 "trsvcid": "4421" 00:12:11.445 }, 00:12:11.445 "method": "nvmf_subsystem_remove_listener", 00:12:11.445 "req_id": 1 00:12:11.445 } 00:12:11.445 Got JSON-RPC error response 00:12:11.445 response: 00:12:11.445 { 00:12:11.445 "code": -32602, 00:12:11.445 "message": "Invalid parameters" 00:12:11.445 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:11.445 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14265 -i 0 00:12:11.704 [2024-11-20 09:44:34.872717] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14265: invalid cntlid range [0-65519] 00:12:11.704 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:11.704 { 00:12:11.704 "nqn": "nqn.2016-06.io.spdk:cnode14265", 00:12:11.704 "min_cntlid": 0, 00:12:11.704 "method": "nvmf_create_subsystem", 00:12:11.704 "req_id": 1 00:12:11.704 } 00:12:11.704 Got JSON-RPC error response 00:12:11.704 response: 00:12:11.704 { 00:12:11.704 "code": -32602, 00:12:11.704 "message": "Invalid cntlid range [0-65519]" 00:12:11.704 }' 00:12:11.704 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:11.704 { 00:12:11.704 "nqn": "nqn.2016-06.io.spdk:cnode14265", 00:12:11.704 "min_cntlid": 0, 00:12:11.704 "method": 
"nvmf_create_subsystem", 00:12:11.704 "req_id": 1 00:12:11.704 } 00:12:11.704 Got JSON-RPC error response 00:12:11.704 response: 00:12:11.704 { 00:12:11.704 "code": -32602, 00:12:11.704 "message": "Invalid cntlid range [0-65519]" 00:12:11.704 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.704 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24552 -i 65520 00:12:11.964 [2024-11-20 09:44:35.081443] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24552: invalid cntlid range [65520-65519] 00:12:11.964 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:11.964 { 00:12:11.964 "nqn": "nqn.2016-06.io.spdk:cnode24552", 00:12:11.964 "min_cntlid": 65520, 00:12:11.964 "method": "nvmf_create_subsystem", 00:12:11.964 "req_id": 1 00:12:11.964 } 00:12:11.964 Got JSON-RPC error response 00:12:11.964 response: 00:12:11.964 { 00:12:11.964 "code": -32602, 00:12:11.964 "message": "Invalid cntlid range [65520-65519]" 00:12:11.964 }' 00:12:11.964 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:11.964 { 00:12:11.964 "nqn": "nqn.2016-06.io.spdk:cnode24552", 00:12:11.964 "min_cntlid": 65520, 00:12:11.964 "method": "nvmf_create_subsystem", 00:12:11.964 "req_id": 1 00:12:11.964 } 00:12:11.964 Got JSON-RPC error response 00:12:11.964 response: 00:12:11.964 { 00:12:11.964 "code": -32602, 00:12:11.964 "message": "Invalid cntlid range [65520-65519]" 00:12:11.964 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.964 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30431 -I 0 00:12:11.964 [2024-11-20 09:44:35.294175] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode30431: invalid cntlid range [1-0] 00:12:12.224 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:12.224 { 00:12:12.224 "nqn": "nqn.2016-06.io.spdk:cnode30431", 00:12:12.224 "max_cntlid": 0, 00:12:12.224 "method": "nvmf_create_subsystem", 00:12:12.224 "req_id": 1 00:12:12.224 } 00:12:12.224 Got JSON-RPC error response 00:12:12.224 response: 00:12:12.224 { 00:12:12.224 "code": -32602, 00:12:12.224 "message": "Invalid cntlid range [1-0]" 00:12:12.224 }' 00:12:12.224 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:12.224 { 00:12:12.224 "nqn": "nqn.2016-06.io.spdk:cnode30431", 00:12:12.224 "max_cntlid": 0, 00:12:12.224 "method": "nvmf_create_subsystem", 00:12:12.224 "req_id": 1 00:12:12.224 } 00:12:12.224 Got JSON-RPC error response 00:12:12.224 response: 00:12:12.224 { 00:12:12.224 "code": -32602, 00:12:12.224 "message": "Invalid cntlid range [1-0]" 00:12:12.224 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.224 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7489 -I 65520 00:12:12.224 [2024-11-20 09:44:35.494860] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7489: invalid cntlid range [1-65520] 00:12:12.224 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:12.224 { 00:12:12.224 "nqn": "nqn.2016-06.io.spdk:cnode7489", 00:12:12.224 "max_cntlid": 65520, 00:12:12.224 "method": "nvmf_create_subsystem", 00:12:12.224 "req_id": 1 00:12:12.224 } 00:12:12.224 Got JSON-RPC error response 00:12:12.224 response: 00:12:12.224 { 00:12:12.224 "code": -32602, 00:12:12.224 "message": "Invalid cntlid range [1-65520]" 00:12:12.224 }' 00:12:12.224 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:12:12.224 { 00:12:12.224 "nqn": "nqn.2016-06.io.spdk:cnode7489", 00:12:12.224 "max_cntlid": 65520, 00:12:12.224 "method": "nvmf_create_subsystem", 00:12:12.224 "req_id": 1 00:12:12.224 } 00:12:12.224 Got JSON-RPC error response 00:12:12.224 response: 00:12:12.224 { 00:12:12.224 "code": -32602, 00:12:12.224 "message": "Invalid cntlid range [1-65520]" 00:12:12.224 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.224 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20886 -i 6 -I 5 00:12:12.484 [2024-11-20 09:44:35.695546] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20886: invalid cntlid range [6-5] 00:12:12.484 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:12.484 { 00:12:12.484 "nqn": "nqn.2016-06.io.spdk:cnode20886", 00:12:12.484 "min_cntlid": 6, 00:12:12.484 "max_cntlid": 5, 00:12:12.484 "method": "nvmf_create_subsystem", 00:12:12.484 "req_id": 1 00:12:12.484 } 00:12:12.484 Got JSON-RPC error response 00:12:12.484 response: 00:12:12.484 { 00:12:12.484 "code": -32602, 00:12:12.484 "message": "Invalid cntlid range [6-5]" 00:12:12.484 }' 00:12:12.484 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:12.484 { 00:12:12.484 "nqn": "nqn.2016-06.io.spdk:cnode20886", 00:12:12.484 "min_cntlid": 6, 00:12:12.484 "max_cntlid": 5, 00:12:12.484 "method": "nvmf_create_subsystem", 00:12:12.484 "req_id": 1 00:12:12.484 } 00:12:12.484 Got JSON-RPC error response 00:12:12.484 response: 00:12:12.484 { 00:12:12.484 "code": -32602, 00:12:12.485 "message": "Invalid cntlid range [6-5]" 00:12:12.485 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.485 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:12.744 { 00:12:12.744 "name": "foobar", 00:12:12.744 "method": "nvmf_delete_target", 00:12:12.744 "req_id": 1 00:12:12.744 } 00:12:12.744 Got JSON-RPC error response 00:12:12.744 response: 00:12:12.744 { 00:12:12.744 "code": -32602, 00:12:12.744 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:12.744 }' 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:12.744 { 00:12:12.744 "name": "foobar", 00:12:12.744 "method": "nvmf_delete_target", 00:12:12.744 "req_id": 1 00:12:12.744 } 00:12:12.744 Got JSON-RPC error response 00:12:12.744 response: 00:12:12.744 { 00:12:12.744 "code": -32602, 00:12:12.744 "message": "The specified target doesn't exist, cannot delete it." 00:12:12.744 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.744 rmmod nvme_tcp 00:12:12.744 
rmmod nvme_fabrics 00:12:12.744 rmmod nvme_keyring 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2852351 ']' 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2852351 00:12:12.744 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2852351 ']' 00:12:12.745 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2852351 00:12:12.745 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:12.745 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.745 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2852351 00:12:12.745 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.745 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.745 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2852351' 00:12:12.745 killing process with pid 2852351 00:12:12.745 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2852351 00:12:12.745 09:44:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2852351 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.005 09:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.005 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.913 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.913 00:12:14.913 real 0m12.674s 00:12:14.913 user 0m21.138s 00:12:14.913 sys 0m5.458s 00:12:14.913 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.913 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:14.913 ************************************ 00:12:14.913 END TEST nvmf_invalid 00:12:14.913 ************************************ 00:12:14.913 09:44:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:12:14.913 09:44:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.913 09:44:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.913 09:44:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.174 ************************************ 00:12:15.174 START TEST nvmf_connect_stress 00:12:15.174 ************************************ 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:15.174 * Looking for test storage... 00:12:15.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # lcov --version 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:12:15.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.174 --rc genhtml_branch_coverage=1 00:12:15.174 --rc genhtml_function_coverage=1 00:12:15.174 --rc genhtml_legend=1 00:12:15.174 --rc 
geninfo_all_blocks=1 00:12:15.174 --rc geninfo_unexecuted_blocks=1 00:12:15.174 00:12:15.174 ' 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:12:15.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.174 --rc genhtml_branch_coverage=1 00:12:15.174 --rc genhtml_function_coverage=1 00:12:15.174 --rc genhtml_legend=1 00:12:15.174 --rc geninfo_all_blocks=1 00:12:15.174 --rc geninfo_unexecuted_blocks=1 00:12:15.174 00:12:15.174 ' 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:12:15.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.174 --rc genhtml_branch_coverage=1 00:12:15.174 --rc genhtml_function_coverage=1 00:12:15.174 --rc genhtml_legend=1 00:12:15.174 --rc geninfo_all_blocks=1 00:12:15.174 --rc geninfo_unexecuted_blocks=1 00:12:15.174 00:12:15.174 ' 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:12:15.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.174 --rc genhtml_branch_coverage=1 00:12:15.174 --rc genhtml_function_coverage=1 00:12:15.174 --rc genhtml_legend=1 00:12:15.174 --rc geninfo_all_blocks=1 00:12:15.174 --rc geninfo_unexecuted_blocks=1 00:12:15.174 00:12:15.174 ' 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.174 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.175 
09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.175 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:21.945 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.946 09:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:21.946 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:21.946 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.946 09:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:21.946 Found net devices under 0000:86:00.0: cvl_0_0 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:21.946 Found net devices under 0000:86:00.1: cvl_0_1 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:12:21.946 00:12:21.946 --- 10.0.0.2 ping statistics --- 00:12:21.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.946 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:21.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:12:21.946 00:12:21.946 --- 10.0.0.1 ping statistics --- 00:12:21.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.946 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.946 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2856747 00:12:21.947 09:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2856747 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2856747 ']' 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.947 09:44:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.947 [2024-11-20 09:44:44.517655] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:12:21.947 [2024-11-20 09:44:44.517701] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.947 [2024-11-20 09:44:44.601574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:21.947 [2024-11-20 09:44:44.643926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:21.947 [2024-11-20 09:44:44.643967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.947 [2024-11-20 09:44:44.643974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.947 [2024-11-20 09:44:44.643981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.947 [2024-11-20 09:44:44.643986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.947 [2024-11-20 09:44:44.645392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.947 [2024-11-20 09:44:44.645500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.947 [2024-11-20 09:44:44.645501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.207 [2024-11-20 09:44:45.420100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.207 [2024-11-20 09:44:45.440283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.207 NULL1 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2856991 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.207 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.208 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.208 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.208 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.208 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.208 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.467 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.467 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.467 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:22.467 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:22.467 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:22.467 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.467 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.467 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.726 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.726 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:22.726 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.726 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.726 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.985 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.985 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:22.985 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.985 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.985 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.244 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.244 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:23.244 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.244 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.244 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.812 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.812 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:23.812 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.812 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.812 09:44:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.071 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.071 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:24.071 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.071 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.071 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.330 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.330 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:24.330 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.330 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.330 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.590 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.590 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:24.590 09:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.590 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.590 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.849 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.849 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:24.849 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.849 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.849 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.417 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.417 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:25.417 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.417 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.417 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.675 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.675 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:25.675 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.675 09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.675 
09:44:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.934 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.934 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:25.934 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.934 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.935 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.193 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.193 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:26.193 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.193 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.193 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.452 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.452 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:26.452 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.452 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.452 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.022 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.022 
09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:27.022 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.022 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.022 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.281 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.281 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:27.281 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.281 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.281 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.541 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.541 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:27.541 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.541 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.541 09:44:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.800 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.800 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:27.800 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:27.800 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.800 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.368 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.368 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:28.368 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.368 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.368 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.627 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.627 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:28.627 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.627 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.627 09:44:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.886 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.886 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:28.886 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.886 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.886 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:29.145 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.145 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:29.145 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.145 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.145 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.404 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.404 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:29.404 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.404 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.404 09:44:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:29.972 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.972 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:29.972 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:29.972 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.972 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.231 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.231 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2856991 00:12:30.231 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.231 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.231 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.490 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.490 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:30.490 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.490 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.490 09:44:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:30.749 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.749 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:30.749 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:30.749 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.749 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.008 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.008 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:31.008 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.008 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:31.008 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.577 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.577 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:31.577 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.577 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.577 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.836 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.836 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:31.836 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:31.836 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.836 09:44:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.094 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.095 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:32.095 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:32.095 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.095 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:32.353 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2856991 00:12:32.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2856991) - No such process 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2856991 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.353 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.353 rmmod nvme_tcp 00:12:32.353 rmmod nvme_fabrics 00:12:32.353 rmmod nvme_keyring 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2856747 ']' 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2856747 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2856747 ']' 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2856747 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2856747 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2856747' 00:12:32.613 killing process with pid 2856747 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2856747 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2856747 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.613 09:44:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.152 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:35.152 00:12:35.152 real 0m19.722s 00:12:35.152 user 0m41.533s 00:12:35.152 sys 0m8.605s 00:12:35.152 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.152 09:44:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:35.152 ************************************ 00:12:35.152 END TEST nvmf_connect_stress 00:12:35.152 ************************************ 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:35.152 ************************************ 00:12:35.152 START TEST nvmf_fused_ordering 00:12:35.152 ************************************ 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:35.152 * Looking for test storage... 00:12:35.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # lcov --version 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:35.152 09:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:35.152 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:12:35.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.153 --rc genhtml_branch_coverage=1 00:12:35.153 --rc genhtml_function_coverage=1 00:12:35.153 --rc genhtml_legend=1 00:12:35.153 --rc geninfo_all_blocks=1 00:12:35.153 --rc geninfo_unexecuted_blocks=1 00:12:35.153 00:12:35.153 ' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:12:35.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.153 --rc genhtml_branch_coverage=1 00:12:35.153 --rc genhtml_function_coverage=1 00:12:35.153 --rc genhtml_legend=1 00:12:35.153 --rc geninfo_all_blocks=1 00:12:35.153 --rc geninfo_unexecuted_blocks=1 00:12:35.153 00:12:35.153 ' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:12:35.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:35.153 --rc genhtml_branch_coverage=1 00:12:35.153 --rc genhtml_function_coverage=1 00:12:35.153 --rc genhtml_legend=1 00:12:35.153 --rc geninfo_all_blocks=1 00:12:35.153 --rc geninfo_unexecuted_blocks=1 00:12:35.153 00:12:35.153 ' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:12:35.153 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:35.153 --rc genhtml_branch_coverage=1 00:12:35.153 --rc genhtml_function_coverage=1 00:12:35.153 --rc genhtml_legend=1 00:12:35.153 --rc geninfo_all_blocks=1 00:12:35.153 --rc geninfo_unexecuted_blocks=1 00:12:35.153 00:12:35.153 ' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.153 09:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:35.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:35.153 09:44:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.729 09:45:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:41.729 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.729 09:45:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:41.729 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.729 09:45:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:41.729 Found net devices under 0000:86:00.0: cvl_0_0 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.729 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:41.730 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.730 09:45:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:41.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:12:41.730 00:12:41.730 --- 10.0.0.2 ping statistics --- 00:12:41.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.730 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:41.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:12:41.730 00:12:41.730 --- 10.0.0.1 ping statistics --- 00:12:41.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.730 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:41.730 09:45:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2862287 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2862287 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2862287 ']' 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 [2024-11-20 09:45:04.321366] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:12:41.730 [2024-11-20 09:45:04.321414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.730 [2024-11-20 09:45:04.401907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.730 [2024-11-20 09:45:04.442640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.730 [2024-11-20 09:45:04.442675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.730 [2024-11-20 09:45:04.442682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.730 [2024-11-20 09:45:04.442689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.730 [2024-11-20 09:45:04.442694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:41.730 [2024-11-20 09:45:04.443281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 [2024-11-20 09:45:04.577908] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 [2024-11-20 09:45:04.598093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 NULL1 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.730 09:45:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:41.730 [2024-11-20 09:45:04.655875] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:12:41.730 [2024-11-20 09:45:04.655911] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862306 ] 00:12:41.990 Attached to nqn.2016-06.io.spdk:cnode1 00:12:41.990 Namespace ID: 1 size: 1GB 00:12:41.990 fused_ordering(0) 00:12:41.990 fused_ordering(1) 00:12:41.990 fused_ordering(2) 00:12:41.990 fused_ordering(3) 00:12:41.990 fused_ordering(4) 00:12:41.990 fused_ordering(5) 00:12:41.990 fused_ordering(6) 00:12:41.990 fused_ordering(7) 00:12:41.990 fused_ordering(8) 00:12:41.990 fused_ordering(9) 00:12:41.990 fused_ordering(10) 00:12:41.990 fused_ordering(11) 00:12:41.990 fused_ordering(12) 00:12:41.990 fused_ordering(13) 00:12:41.990 fused_ordering(14) 00:12:41.990 fused_ordering(15) 00:12:41.990 fused_ordering(16) 00:12:41.990 fused_ordering(17) 00:12:41.990 fused_ordering(18) 00:12:41.990 fused_ordering(19) 00:12:41.990 fused_ordering(20) 00:12:41.990 fused_ordering(21) 00:12:41.990 fused_ordering(22) 00:12:41.990 fused_ordering(23) 00:12:41.990 fused_ordering(24) 00:12:41.990 fused_ordering(25) 00:12:41.990 fused_ordering(26) 00:12:41.990 fused_ordering(27) 00:12:41.990 
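The `rpc_cmd` calls traced above (fused_ordering.sh lines 15-20) build the target under test. Reconstructed as plain `scripts/rpc.py` invocations against an already-running `nvmf_tgt` (a sketch under those assumptions, not part of this log; argument values are copied from the trace):

```shell
# TCP transport with 8192-byte in-capsule data (-o toggles optimization, -u sets in-capsule size)
rpc.py nvmf_create_transport -t tcp -o -u 8192
# Subsystem with any-host access (-a), serial number, and a 10-namespace cap (-m)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listen for NVMe/TCP on 10.0.0.2:4420, matching the nvmf_tcp_listen notice below
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Null bdev: 1000 MB, 512-byte blocks -- hence "size: 1GB" when the initiator attaches
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_wait_for_examine
# Expose the null bdev as namespace 1 of the subsystem
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```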
fused_ordering(28) 00:12:41.990 fused_ordering(29) 00:12:41.990 fused_ordering(30) 00:12:41.990 fused_ordering(31) 00:12:41.990 fused_ordering(32) 00:12:41.990 fused_ordering(33) 00:12:41.990 fused_ordering(34) 00:12:41.990 fused_ordering(35) 00:12:41.990 fused_ordering(36) 00:12:41.990 fused_ordering(37) 00:12:41.990 fused_ordering(38) 00:12:41.990 fused_ordering(39) 00:12:41.990 fused_ordering(40) 00:12:41.990 fused_ordering(41) 00:12:41.990 fused_ordering(42) 00:12:41.990 fused_ordering(43) 00:12:41.990 fused_ordering(44) 00:12:41.990 fused_ordering(45) 00:12:41.990 fused_ordering(46) 00:12:41.990 fused_ordering(47) 00:12:41.990 fused_ordering(48) 00:12:41.990 fused_ordering(49) 00:12:41.990 fused_ordering(50) 00:12:41.990 fused_ordering(51) 00:12:41.990 fused_ordering(52) 00:12:41.990 fused_ordering(53) 00:12:41.990 fused_ordering(54) 00:12:41.990 fused_ordering(55) 00:12:41.990 fused_ordering(56) 00:12:41.990 fused_ordering(57) 00:12:41.990 fused_ordering(58) 00:12:41.990 fused_ordering(59) 00:12:41.990 fused_ordering(60) 00:12:41.990 fused_ordering(61) 00:12:41.990 fused_ordering(62) 00:12:41.990 fused_ordering(63) 00:12:41.990 fused_ordering(64) 00:12:41.990 fused_ordering(65) 00:12:41.990 fused_ordering(66) 00:12:41.990 fused_ordering(67) 00:12:41.990 fused_ordering(68) 00:12:41.990 fused_ordering(69) 00:12:41.990 fused_ordering(70) 00:12:41.990 fused_ordering(71) 00:12:41.990 fused_ordering(72) 00:12:41.991 fused_ordering(73) 00:12:41.991 fused_ordering(74) 00:12:41.991 fused_ordering(75) 00:12:41.991 fused_ordering(76) 00:12:41.991 fused_ordering(77) 00:12:41.991 fused_ordering(78) 00:12:41.991 fused_ordering(79) 00:12:41.991 fused_ordering(80) 00:12:41.991 fused_ordering(81) 00:12:41.991 fused_ordering(82) 00:12:41.991 fused_ordering(83) 00:12:41.991 fused_ordering(84) 00:12:41.991 fused_ordering(85) 00:12:41.991 fused_ordering(86) 00:12:41.991 fused_ordering(87) 00:12:41.991 fused_ordering(88) 00:12:41.991 fused_ordering(89) 00:12:41.991 
fused_ordering(90) 00:12:41.991 fused_ordering(91) 00:12:41.991 fused_ordering(92) 00:12:41.991 fused_ordering(93) 00:12:41.991 fused_ordering(94) 00:12:41.991 fused_ordering(95) 00:12:41.991 fused_ordering(96) 00:12:41.991 fused_ordering(97) 00:12:41.991 fused_ordering(98) 00:12:41.991 fused_ordering(99) 00:12:41.991 fused_ordering(100) 00:12:41.991 fused_ordering(101) 00:12:41.991 fused_ordering(102) 00:12:41.991 fused_ordering(103) 00:12:41.991 fused_ordering(104) 00:12:41.991 fused_ordering(105) 00:12:41.991 fused_ordering(106) 00:12:41.991 fused_ordering(107) 00:12:41.991 fused_ordering(108) 00:12:41.991 fused_ordering(109) 00:12:41.991 fused_ordering(110) 00:12:41.991 fused_ordering(111) 00:12:41.991 fused_ordering(112) 00:12:41.991 fused_ordering(113) 00:12:41.991 fused_ordering(114) 00:12:41.991 fused_ordering(115) 00:12:41.991 fused_ordering(116) 00:12:41.991 fused_ordering(117) 00:12:41.991 fused_ordering(118) 00:12:41.991 fused_ordering(119) 00:12:41.991 fused_ordering(120) 00:12:41.991 fused_ordering(121) 00:12:41.991 fused_ordering(122) 00:12:41.991 fused_ordering(123) 00:12:41.991 fused_ordering(124) 00:12:41.991 fused_ordering(125) 00:12:41.991 fused_ordering(126) 00:12:41.991 fused_ordering(127) 00:12:41.991 fused_ordering(128) 00:12:41.991 fused_ordering(129) 00:12:41.991 fused_ordering(130) 00:12:41.991 fused_ordering(131) 00:12:41.991 fused_ordering(132) 00:12:41.991 fused_ordering(133) 00:12:41.991 fused_ordering(134) 00:12:41.991 fused_ordering(135) 00:12:41.991 fused_ordering(136) 00:12:41.991 fused_ordering(137) 00:12:41.991 fused_ordering(138) 00:12:41.991 fused_ordering(139) 00:12:41.991 fused_ordering(140) 00:12:41.991 fused_ordering(141) 00:12:41.991 fused_ordering(142) 00:12:41.991 fused_ordering(143) 00:12:41.991 fused_ordering(144) 00:12:41.991 fused_ordering(145) 00:12:41.991 fused_ordering(146) 00:12:41.991 fused_ordering(147) 00:12:41.991 fused_ordering(148) 00:12:41.991 fused_ordering(149) 00:12:41.991 fused_ordering(150) 
00:12:41.991 fused_ordering(151) 00:12:41.991 fused_ordering(152) 00:12:41.991 fused_ordering(153) 00:12:41.991 fused_ordering(154) 00:12:41.991 fused_ordering(155) 00:12:41.991 fused_ordering(156) 00:12:41.991 fused_ordering(157) 00:12:41.991 fused_ordering(158) 00:12:41.991 fused_ordering(159) 00:12:41.991 fused_ordering(160) 00:12:41.991 fused_ordering(161) 00:12:41.991 fused_ordering(162) 00:12:41.991 fused_ordering(163) 00:12:41.991 fused_ordering(164) 00:12:41.991 fused_ordering(165) 00:12:41.991 fused_ordering(166) 00:12:41.991 fused_ordering(167) 00:12:41.991 fused_ordering(168) 00:12:41.991 fused_ordering(169) 00:12:41.991 fused_ordering(170) 00:12:41.991 fused_ordering(171) 00:12:41.991 fused_ordering(172) 00:12:41.991 fused_ordering(173) 00:12:41.991 fused_ordering(174) 00:12:41.991 fused_ordering(175) 00:12:41.991 fused_ordering(176) 00:12:41.991 fused_ordering(177) 00:12:41.991 fused_ordering(178) 00:12:41.991 fused_ordering(179) 00:12:41.991 fused_ordering(180) 00:12:41.991 fused_ordering(181) 00:12:41.991 fused_ordering(182) 00:12:41.991 fused_ordering(183) 00:12:41.991 fused_ordering(184) 00:12:41.991 fused_ordering(185) 00:12:41.991 fused_ordering(186) 00:12:41.991 fused_ordering(187) 00:12:41.991 fused_ordering(188) 00:12:41.991 fused_ordering(189) 00:12:41.991 fused_ordering(190) 00:12:41.991 fused_ordering(191) 00:12:41.991 fused_ordering(192) 00:12:41.991 fused_ordering(193) 00:12:41.991 fused_ordering(194) 00:12:41.991 fused_ordering(195) 00:12:41.991 fused_ordering(196) 00:12:41.991 fused_ordering(197) 00:12:41.991 fused_ordering(198) 00:12:41.991 fused_ordering(199) 00:12:41.991 fused_ordering(200) 00:12:41.991 fused_ordering(201) 00:12:41.991 fused_ordering(202) 00:12:41.991 fused_ordering(203) 00:12:41.991 fused_ordering(204) 00:12:41.991 fused_ordering(205) 00:12:42.251 fused_ordering(206) 00:12:42.251 fused_ordering(207) 00:12:42.251 fused_ordering(208) 00:12:42.251 fused_ordering(209) 00:12:42.251 fused_ordering(210) 00:12:42.251 
fused_ordering(211) 00:12:42.251 fused_ordering(212) 00:12:42.251 fused_ordering(213) 00:12:42.251 fused_ordering(214) 00:12:42.251 fused_ordering(215) 00:12:42.251 fused_ordering(216) 00:12:42.251 fused_ordering(217) 00:12:42.251 fused_ordering(218) 00:12:42.251 fused_ordering(219) 00:12:42.251 fused_ordering(220) 00:12:42.251 fused_ordering(221) 00:12:42.251 fused_ordering(222) 00:12:42.251 fused_ordering(223) 00:12:42.251 fused_ordering(224) 00:12:42.251 fused_ordering(225) 00:12:42.251 fused_ordering(226) 00:12:42.251 fused_ordering(227) 00:12:42.251 fused_ordering(228) 00:12:42.251 fused_ordering(229) 00:12:42.251 fused_ordering(230) 00:12:42.251 fused_ordering(231) 00:12:42.251 fused_ordering(232) 00:12:42.251 fused_ordering(233) 00:12:42.251 fused_ordering(234) 00:12:42.251 fused_ordering(235) 00:12:42.251 fused_ordering(236) 00:12:42.251 fused_ordering(237) 00:12:42.251 fused_ordering(238) 00:12:42.251 fused_ordering(239) 00:12:42.251 fused_ordering(240) 00:12:42.251 fused_ordering(241) 00:12:42.251 fused_ordering(242) 00:12:42.251 fused_ordering(243) 00:12:42.251 fused_ordering(244) 00:12:42.251 fused_ordering(245) 00:12:42.251 fused_ordering(246) 00:12:42.251 fused_ordering(247) 00:12:42.251 fused_ordering(248) 00:12:42.251 fused_ordering(249) 00:12:42.251 fused_ordering(250) 00:12:42.251 fused_ordering(251) 00:12:42.251 fused_ordering(252) 00:12:42.251 fused_ordering(253) 00:12:42.251 fused_ordering(254) 00:12:42.251 fused_ordering(255) 00:12:42.251 fused_ordering(256) 00:12:42.251 fused_ordering(257) 00:12:42.251 fused_ordering(258) 00:12:42.251 fused_ordering(259) 00:12:42.251 fused_ordering(260) 00:12:42.251 fused_ordering(261) 00:12:42.251 fused_ordering(262) 00:12:42.251 fused_ordering(263) 00:12:42.251 fused_ordering(264) 00:12:42.251 fused_ordering(265) 00:12:42.251 fused_ordering(266) 00:12:42.251 fused_ordering(267) 00:12:42.251 fused_ordering(268) 00:12:42.251 fused_ordering(269) 00:12:42.251 fused_ordering(270) 00:12:42.251 fused_ordering(271) 
00:12:42.251 fused_ordering(272) 00:12:42.251 fused_ordering(273) 00:12:42.251 fused_ordering(274) 00:12:42.251 fused_ordering(275) 00:12:42.251 fused_ordering(276) 00:12:42.251 fused_ordering(277) 00:12:42.251 fused_ordering(278) 00:12:42.251 fused_ordering(279) 00:12:42.251 fused_ordering(280) 00:12:42.251 fused_ordering(281) 00:12:42.251 fused_ordering(282) 00:12:42.251 fused_ordering(283) 00:12:42.251 fused_ordering(284) 00:12:42.251 fused_ordering(285) 00:12:42.251 fused_ordering(286) 00:12:42.251 fused_ordering(287) 00:12:42.251 fused_ordering(288) 00:12:42.251 fused_ordering(289) 00:12:42.251 fused_ordering(290) 00:12:42.251 fused_ordering(291) 00:12:42.251 fused_ordering(292) 00:12:42.251 fused_ordering(293) 00:12:42.251 fused_ordering(294) 00:12:42.251 fused_ordering(295) 00:12:42.251 fused_ordering(296) 00:12:42.251 fused_ordering(297) 00:12:42.251 fused_ordering(298) 00:12:42.251 fused_ordering(299) 00:12:42.251 fused_ordering(300) 00:12:42.251 fused_ordering(301) 00:12:42.251 fused_ordering(302) 00:12:42.251 fused_ordering(303) 00:12:42.251 fused_ordering(304) 00:12:42.251 fused_ordering(305) 00:12:42.251 fused_ordering(306) 00:12:42.251 fused_ordering(307) 00:12:42.251 fused_ordering(308) 00:12:42.251 fused_ordering(309) 00:12:42.251 fused_ordering(310) 00:12:42.251 fused_ordering(311) 00:12:42.251 fused_ordering(312) 00:12:42.251 fused_ordering(313) 00:12:42.251 fused_ordering(314) 00:12:42.251 fused_ordering(315) 00:12:42.251 fused_ordering(316) 00:12:42.251 fused_ordering(317) 00:12:42.251 fused_ordering(318) 00:12:42.251 fused_ordering(319) 00:12:42.251 fused_ordering(320) 00:12:42.251 fused_ordering(321) 00:12:42.251 fused_ordering(322) 00:12:42.251 fused_ordering(323) 00:12:42.251 fused_ordering(324) 00:12:42.251 fused_ordering(325) 00:12:42.251 fused_ordering(326) 00:12:42.251 fused_ordering(327) 00:12:42.251 fused_ordering(328) 00:12:42.251 fused_ordering(329) 00:12:42.251 fused_ordering(330) 00:12:42.251 fused_ordering(331) 00:12:42.251 
fused_ordering(332) 00:12:42.251 fused_ordering(333) 00:12:42.251 fused_ordering(334) 00:12:42.251 fused_ordering(335) 00:12:42.251 fused_ordering(336) 00:12:42.251 fused_ordering(337) 00:12:42.251 fused_ordering(338) 00:12:42.251 fused_ordering(339) 00:12:42.251 fused_ordering(340) 00:12:42.251 fused_ordering(341) 00:12:42.251 fused_ordering(342) 00:12:42.251 fused_ordering(343) 00:12:42.252 fused_ordering(344) 00:12:42.252 fused_ordering(345) 00:12:42.252 fused_ordering(346) 00:12:42.252 fused_ordering(347) 00:12:42.252 fused_ordering(348) 00:12:42.252 fused_ordering(349) 00:12:42.252 fused_ordering(350) 00:12:42.252 fused_ordering(351) 00:12:42.252 fused_ordering(352) 00:12:42.252 fused_ordering(353) 00:12:42.252 fused_ordering(354) 00:12:42.252 fused_ordering(355) 00:12:42.252 fused_ordering(356) 00:12:42.252 fused_ordering(357) 00:12:42.252 fused_ordering(358) 00:12:42.252 fused_ordering(359) 00:12:42.252 fused_ordering(360) 00:12:42.252 fused_ordering(361) 00:12:42.252 fused_ordering(362) 00:12:42.252 fused_ordering(363) 00:12:42.252 fused_ordering(364) 00:12:42.252 fused_ordering(365) 00:12:42.252 fused_ordering(366) 00:12:42.252 fused_ordering(367) 00:12:42.252 fused_ordering(368) 00:12:42.252 fused_ordering(369) 00:12:42.252 fused_ordering(370) 00:12:42.252 fused_ordering(371) 00:12:42.252 fused_ordering(372) 00:12:42.252 fused_ordering(373) 00:12:42.252 fused_ordering(374) 00:12:42.252 fused_ordering(375) 00:12:42.252 fused_ordering(376) 00:12:42.252 fused_ordering(377) 00:12:42.252 fused_ordering(378) 00:12:42.252 fused_ordering(379) 00:12:42.252 fused_ordering(380) 00:12:42.252 fused_ordering(381) 00:12:42.252 fused_ordering(382) 00:12:42.252 fused_ordering(383) 00:12:42.252 fused_ordering(384) 00:12:42.252 fused_ordering(385) 00:12:42.252 fused_ordering(386) 00:12:42.252 fused_ordering(387) 00:12:42.252 fused_ordering(388) 00:12:42.252 fused_ordering(389) 00:12:42.252 fused_ordering(390) 00:12:42.252 fused_ordering(391) 00:12:42.252 fused_ordering(392) 
00:12:42.252 fused_ordering(393) 00:12:42.252 fused_ordering(394) 00:12:42.252 fused_ordering(395) 00:12:42.252 fused_ordering(396) 00:12:42.252 fused_ordering(397) 00:12:42.252 fused_ordering(398) 00:12:42.252 fused_ordering(399) 00:12:42.252 fused_ordering(400) 00:12:42.252 fused_ordering(401) 00:12:42.252 fused_ordering(402) 00:12:42.252 fused_ordering(403) 00:12:42.252 fused_ordering(404) 00:12:42.252 fused_ordering(405) 00:12:42.252 fused_ordering(406) 00:12:42.252 fused_ordering(407) 00:12:42.252 fused_ordering(408) 00:12:42.252 fused_ordering(409) 00:12:42.252 fused_ordering(410) 00:12:42.511 fused_ordering(411) 00:12:42.511 fused_ordering(412) 00:12:42.511 fused_ordering(413) 00:12:42.511 fused_ordering(414) 00:12:42.511 fused_ordering(415) 00:12:42.511 fused_ordering(416) 00:12:42.511 fused_ordering(417) 00:12:42.511 fused_ordering(418) 00:12:42.511 fused_ordering(419) 00:12:42.511 fused_ordering(420) 00:12:42.511 fused_ordering(421) 00:12:42.511 fused_ordering(422) 00:12:42.511 fused_ordering(423) 00:12:42.511 fused_ordering(424) 00:12:42.511 fused_ordering(425) 00:12:42.511 fused_ordering(426) 00:12:42.511 fused_ordering(427) 00:12:42.511 fused_ordering(428) 00:12:42.511 fused_ordering(429) 00:12:42.511 fused_ordering(430) 00:12:42.511 fused_ordering(431) 00:12:42.511 fused_ordering(432) 00:12:42.511 fused_ordering(433) 00:12:42.511 fused_ordering(434) 00:12:42.511 fused_ordering(435) 00:12:42.511 fused_ordering(436) 00:12:42.511 fused_ordering(437) 00:12:42.511 fused_ordering(438) 00:12:42.511 fused_ordering(439) 00:12:42.511 fused_ordering(440) 00:12:42.511 fused_ordering(441) 00:12:42.511 fused_ordering(442) 00:12:42.511 fused_ordering(443) 00:12:42.511 fused_ordering(444) 00:12:42.511 fused_ordering(445) 00:12:42.511 fused_ordering(446) 00:12:42.511 fused_ordering(447) 00:12:42.511 fused_ordering(448) 00:12:42.511 fused_ordering(449) 00:12:42.511 fused_ordering(450) 00:12:42.511 fused_ordering(451) 00:12:42.511 fused_ordering(452) 00:12:42.511 
fused_ordering(453) 00:12:42.511 fused_ordering(454) 00:12:42.511 fused_ordering(455) 00:12:42.511 fused_ordering(456) 00:12:42.511 fused_ordering(457) 00:12:42.511 fused_ordering(458) 00:12:42.512 fused_ordering(459) 00:12:42.512 fused_ordering(460) 00:12:42.512 fused_ordering(461) 00:12:42.512 fused_ordering(462) 00:12:42.512 fused_ordering(463) 00:12:42.512 fused_ordering(464) 00:12:42.512 fused_ordering(465) 00:12:42.512 fused_ordering(466) 00:12:42.512 fused_ordering(467) 00:12:42.512 fused_ordering(468) 00:12:42.512 fused_ordering(469) 00:12:42.512 fused_ordering(470) 00:12:42.512 fused_ordering(471) 00:12:42.512 fused_ordering(472) 00:12:42.512 fused_ordering(473) 00:12:42.512 fused_ordering(474) 00:12:42.512 fused_ordering(475) 00:12:42.512 fused_ordering(476) 00:12:42.512 fused_ordering(477) 00:12:42.512 fused_ordering(478) 00:12:42.512 fused_ordering(479) 00:12:42.512 fused_ordering(480) 00:12:42.512 fused_ordering(481) 00:12:42.512 fused_ordering(482) 00:12:42.512 fused_ordering(483) 00:12:42.512 fused_ordering(484) 00:12:42.512 fused_ordering(485) 00:12:42.512 fused_ordering(486) 00:12:42.512 fused_ordering(487) 00:12:42.512 fused_ordering(488) 00:12:42.512 fused_ordering(489) 00:12:42.512 fused_ordering(490) 00:12:42.512 fused_ordering(491) 00:12:42.512 fused_ordering(492) 00:12:42.512 fused_ordering(493) 00:12:42.512 fused_ordering(494) 00:12:42.512 fused_ordering(495) 00:12:42.512 fused_ordering(496) 00:12:42.512 fused_ordering(497) 00:12:42.512 fused_ordering(498) 00:12:42.512 fused_ordering(499) 00:12:42.512 fused_ordering(500) 00:12:42.512 fused_ordering(501) 00:12:42.512 fused_ordering(502) 00:12:42.512 fused_ordering(503) 00:12:42.512 fused_ordering(504) 00:12:42.512 fused_ordering(505) 00:12:42.512 fused_ordering(506) 00:12:42.512 fused_ordering(507) 00:12:42.512 fused_ordering(508) 00:12:42.512 fused_ordering(509) 00:12:42.512 fused_ordering(510) 00:12:42.512 fused_ordering(511) 00:12:42.512 fused_ordering(512) 00:12:42.512 fused_ordering(513) 
00:12:42.512 fused_ordering(514) 00:12:42.512 fused_ordering(515) 00:12:42.512 fused_ordering(516) 00:12:42.512 fused_ordering(517) 00:12:42.512 fused_ordering(518) 00:12:42.512 fused_ordering(519) 00:12:42.512 fused_ordering(520) 00:12:42.512 fused_ordering(521) 00:12:42.512 fused_ordering(522) 00:12:42.512 fused_ordering(523) 00:12:42.512 fused_ordering(524) 00:12:42.512 fused_ordering(525) 00:12:42.512 fused_ordering(526) 00:12:42.512 fused_ordering(527) 00:12:42.512 fused_ordering(528) 00:12:42.512 fused_ordering(529) 00:12:42.512 fused_ordering(530) 00:12:42.512 fused_ordering(531) 00:12:42.512 fused_ordering(532) 00:12:42.512 fused_ordering(533) 00:12:42.512 fused_ordering(534) 00:12:42.512 fused_ordering(535) 00:12:42.512 fused_ordering(536) 00:12:42.512 fused_ordering(537) 00:12:42.512 fused_ordering(538) 00:12:42.512 fused_ordering(539) 00:12:42.512 fused_ordering(540) 00:12:42.512 fused_ordering(541) 00:12:42.512 fused_ordering(542) 00:12:42.512 fused_ordering(543) 00:12:42.512 fused_ordering(544) 00:12:42.512 fused_ordering(545) 00:12:42.512 fused_ordering(546) 00:12:42.512 fused_ordering(547) 00:12:42.512 fused_ordering(548) 00:12:42.512 fused_ordering(549) 00:12:42.512 fused_ordering(550) 00:12:42.512 fused_ordering(551) 00:12:42.512 fused_ordering(552) 00:12:42.512 fused_ordering(553) 00:12:42.512 fused_ordering(554) 00:12:42.512 fused_ordering(555) 00:12:42.512 fused_ordering(556) 00:12:42.512 fused_ordering(557) 00:12:42.512 fused_ordering(558) 00:12:42.512 fused_ordering(559) 00:12:42.512 fused_ordering(560) 00:12:42.512 fused_ordering(561) 00:12:42.512 fused_ordering(562) 00:12:42.512 fused_ordering(563) 00:12:42.512 fused_ordering(564) 00:12:42.512 fused_ordering(565) 00:12:42.512 fused_ordering(566) 00:12:42.512 fused_ordering(567) 00:12:42.512 fused_ordering(568) 00:12:42.512 fused_ordering(569) 00:12:42.512 fused_ordering(570) 00:12:42.512 fused_ordering(571) 00:12:42.512 fused_ordering(572) 00:12:42.512 fused_ordering(573) 00:12:42.512 
fused_ordering(574) 00:12:42.512 fused_ordering(575) 00:12:42.512 fused_ordering(576) 00:12:42.512 fused_ordering(577) 00:12:42.512 fused_ordering(578) 00:12:42.512 fused_ordering(579) 00:12:42.512 fused_ordering(580) 00:12:42.512 fused_ordering(581) 00:12:42.512 fused_ordering(582) 00:12:42.512 fused_ordering(583) 00:12:42.512 fused_ordering(584) 00:12:42.512 fused_ordering(585) 00:12:42.512 fused_ordering(586) 00:12:42.512 fused_ordering(587) 00:12:42.512 fused_ordering(588) 00:12:42.512 fused_ordering(589) 00:12:42.512 fused_ordering(590) 00:12:42.512 fused_ordering(591) 00:12:42.512 fused_ordering(592) 00:12:42.512 fused_ordering(593) 00:12:42.512 fused_ordering(594) 00:12:42.512 fused_ordering(595) 00:12:42.512 fused_ordering(596) 00:12:42.512 fused_ordering(597) 00:12:42.512 fused_ordering(598) 00:12:42.512 fused_ordering(599) 00:12:42.512 fused_ordering(600) 00:12:42.512 fused_ordering(601) 00:12:42.512 fused_ordering(602) 00:12:42.512 fused_ordering(603) 00:12:42.512 fused_ordering(604) 00:12:42.512 fused_ordering(605) 00:12:42.512 fused_ordering(606) 00:12:42.512 fused_ordering(607) 00:12:42.512 fused_ordering(608) 00:12:42.512 fused_ordering(609) 00:12:42.512 fused_ordering(610) 00:12:42.512 fused_ordering(611) 00:12:42.512 fused_ordering(612) 00:12:42.512 fused_ordering(613) 00:12:42.512 fused_ordering(614) 00:12:42.512 fused_ordering(615) 00:12:42.772 fused_ordering(616) 00:12:42.772 fused_ordering(617) 00:12:42.772 fused_ordering(618) 00:12:42.772 fused_ordering(619) 00:12:42.772 fused_ordering(620) 00:12:42.772 fused_ordering(621) 00:12:42.772 fused_ordering(622) 00:12:42.772 fused_ordering(623) 00:12:42.772 fused_ordering(624) 00:12:42.772 fused_ordering(625) 00:12:42.772 fused_ordering(626) 00:12:42.772 fused_ordering(627) 00:12:42.772 fused_ordering(628) 00:12:42.772 fused_ordering(629) 00:12:42.772 fused_ordering(630) 00:12:42.772 fused_ordering(631) 00:12:42.772 fused_ordering(632) 00:12:42.772 fused_ordering(633) 00:12:42.772 fused_ordering(634) 
00:12:42.772 fused_ordering(635) 00:12:42.772 fused_ordering(636) 00:12:42.772 fused_ordering(637) 00:12:42.772 fused_ordering(638) 00:12:42.772 fused_ordering(639) 00:12:42.772 fused_ordering(640) 00:12:42.772 fused_ordering(641) 00:12:42.772 fused_ordering(642) 00:12:42.772 fused_ordering(643) 00:12:42.772 fused_ordering(644) 00:12:42.772 fused_ordering(645) 00:12:42.772 fused_ordering(646) 00:12:42.772 fused_ordering(647) 00:12:42.772 fused_ordering(648) 00:12:42.772 fused_ordering(649) 00:12:42.772 fused_ordering(650) 00:12:42.772 fused_ordering(651) 00:12:42.772 fused_ordering(652) 00:12:42.772 fused_ordering(653) 00:12:42.772 fused_ordering(654) 00:12:42.772 fused_ordering(655) 00:12:42.772 fused_ordering(656) 00:12:42.772 fused_ordering(657) 00:12:42.772 fused_ordering(658) 00:12:42.772 fused_ordering(659) 00:12:42.772 fused_ordering(660) 00:12:42.772 fused_ordering(661) 00:12:42.772 fused_ordering(662) 00:12:42.772 fused_ordering(663) 00:12:42.772 fused_ordering(664) 00:12:42.772 fused_ordering(665) 00:12:42.772 fused_ordering(666) 00:12:42.772 fused_ordering(667) 00:12:42.772 fused_ordering(668) 00:12:42.772 fused_ordering(669) 00:12:42.772 fused_ordering(670) 00:12:42.772 fused_ordering(671) 00:12:42.772 fused_ordering(672) 00:12:42.772 fused_ordering(673) 00:12:42.772 fused_ordering(674) 00:12:42.772 fused_ordering(675) 00:12:42.772 fused_ordering(676) 00:12:42.772 fused_ordering(677) 00:12:42.772 fused_ordering(678) 00:12:42.772 fused_ordering(679) 00:12:42.772 fused_ordering(680) 00:12:42.772 fused_ordering(681) 00:12:42.772 fused_ordering(682) 00:12:42.772 fused_ordering(683) 00:12:42.772 fused_ordering(684) 00:12:42.772 fused_ordering(685) 00:12:42.772 fused_ordering(686) 00:12:42.772 fused_ordering(687) 00:12:42.772 fused_ordering(688) 00:12:42.772 fused_ordering(689) 00:12:42.772 fused_ordering(690) 00:12:42.772 fused_ordering(691) 00:12:42.772 fused_ordering(692) 00:12:42.772 fused_ordering(693) 00:12:42.772 fused_ordering(694) 00:12:42.772 
fused_ordering(695) 00:12:42.772 fused_ordering(696) 00:12:42.772 fused_ordering(697) 00:12:42.772 fused_ordering(698) 00:12:42.772 fused_ordering(699) 00:12:42.772 fused_ordering(700) 00:12:42.772 fused_ordering(701) 00:12:42.772 fused_ordering(702) 00:12:42.772 fused_ordering(703) 00:12:42.772 fused_ordering(704) 00:12:42.772 fused_ordering(705) 00:12:42.772 fused_ordering(706) 00:12:42.772 fused_ordering(707) 00:12:42.772 fused_ordering(708) 00:12:42.772 fused_ordering(709) 00:12:42.772 fused_ordering(710) 00:12:42.772 fused_ordering(711) 00:12:42.772 fused_ordering(712) 00:12:42.772 fused_ordering(713) 00:12:42.772 fused_ordering(714) 00:12:42.772 fused_ordering(715) 00:12:42.772 fused_ordering(716) 00:12:42.772 fused_ordering(717) 00:12:42.772 fused_ordering(718) 00:12:42.772 fused_ordering(719) 00:12:42.772 fused_ordering(720) 00:12:42.772 fused_ordering(721) 00:12:42.772 fused_ordering(722) 00:12:42.772 fused_ordering(723) 00:12:42.772 fused_ordering(724) 00:12:42.772 fused_ordering(725) 00:12:42.772 fused_ordering(726) 00:12:42.772 fused_ordering(727) 00:12:42.772 fused_ordering(728) 00:12:42.772 fused_ordering(729) 00:12:42.772 fused_ordering(730) 00:12:42.772 fused_ordering(731) 00:12:42.772 fused_ordering(732) 00:12:42.772 fused_ordering(733) 00:12:42.772 fused_ordering(734) 00:12:42.772 fused_ordering(735) 00:12:42.772 fused_ordering(736) 00:12:42.772 fused_ordering(737) 00:12:42.772 fused_ordering(738) 00:12:42.772 fused_ordering(739) 00:12:42.772 fused_ordering(740) 00:12:42.772 fused_ordering(741) 00:12:42.772 fused_ordering(742) 00:12:42.772 fused_ordering(743) 00:12:42.772 fused_ordering(744) 00:12:42.772 fused_ordering(745) 00:12:42.772 fused_ordering(746) 00:12:42.772 fused_ordering(747) 00:12:42.772 fused_ordering(748) 00:12:42.772 fused_ordering(749) 00:12:42.772 fused_ordering(750) 00:12:42.772 fused_ordering(751) 00:12:42.772 fused_ordering(752) 00:12:42.772 fused_ordering(753) 00:12:42.772 fused_ordering(754) 00:12:42.772 fused_ordering(755) 
00:12:42.772 fused_ordering(756) 00:12:42.772 fused_ordering(757) 00:12:42.772 fused_ordering(758) 00:12:42.772 fused_ordering(759) 00:12:42.772 fused_ordering(760) 00:12:42.772 fused_ordering(761) 00:12:42.772 fused_ordering(762) 00:12:42.772 fused_ordering(763) 00:12:42.772 fused_ordering(764) 00:12:42.772 fused_ordering(765) 00:12:42.772 fused_ordering(766) 00:12:42.772 fused_ordering(767) 00:12:42.772 fused_ordering(768) 00:12:42.772 fused_ordering(769) 00:12:42.772 fused_ordering(770) 00:12:42.772 fused_ordering(771) 00:12:42.772 fused_ordering(772) 00:12:42.772 fused_ordering(773) 00:12:42.772 fused_ordering(774) 00:12:42.772 fused_ordering(775) 00:12:42.772 fused_ordering(776) 00:12:42.772 fused_ordering(777) 00:12:42.772 fused_ordering(778) 00:12:42.772 fused_ordering(779) 00:12:42.772 fused_ordering(780) 00:12:42.772 fused_ordering(781) 00:12:42.772 fused_ordering(782) 00:12:42.772 fused_ordering(783) 00:12:42.772 fused_ordering(784) 00:12:42.772 fused_ordering(785) 00:12:42.772 fused_ordering(786) 00:12:42.772 fused_ordering(787) 00:12:42.772 fused_ordering(788) 00:12:42.772 fused_ordering(789) 00:12:42.772 fused_ordering(790) 00:12:42.772 fused_ordering(791) 00:12:42.772 fused_ordering(792) 00:12:42.772 fused_ordering(793) 00:12:42.772 fused_ordering(794) 00:12:42.772 fused_ordering(795) 00:12:42.772 fused_ordering(796) 00:12:42.772 fused_ordering(797) 00:12:42.772 fused_ordering(798) 00:12:42.772 fused_ordering(799) 00:12:42.772 fused_ordering(800) 00:12:42.772 fused_ordering(801) 00:12:42.772 fused_ordering(802) 00:12:42.772 fused_ordering(803) 00:12:42.772 fused_ordering(804) 00:12:42.772 fused_ordering(805) 00:12:42.772 fused_ordering(806) 00:12:42.772 fused_ordering(807) 00:12:42.772 fused_ordering(808) 00:12:42.772 fused_ordering(809) 00:12:42.772 fused_ordering(810) 00:12:42.772 fused_ordering(811) 00:12:42.772 fused_ordering(812) 00:12:42.772 fused_ordering(813) 00:12:42.772 fused_ordering(814) 00:12:42.772 fused_ordering(815) 00:12:42.772 
fused_ordering(816) 00:12:42.772 fused_ordering(817) 00:12:42.772 fused_ordering(818) 00:12:42.772 fused_ordering(819) 00:12:42.772 fused_ordering(820) 00:12:43.342 fused_ordering(821) 00:12:43.342 fused_ordering(822) 00:12:43.342 fused_ordering(823) 00:12:43.342 fused_ordering(824) 00:12:43.342 fused_ordering(825) 00:12:43.342 fused_ordering(826) 00:12:43.342 fused_ordering(827) 00:12:43.342 fused_ordering(828) 00:12:43.342 fused_ordering(829) 00:12:43.342 fused_ordering(830) 00:12:43.342 fused_ordering(831) 00:12:43.342 fused_ordering(832) 00:12:43.342 fused_ordering(833) 00:12:43.342 fused_ordering(834) 00:12:43.342 fused_ordering(835) 00:12:43.342 fused_ordering(836) 00:12:43.342 fused_ordering(837) 00:12:43.342 fused_ordering(838) 00:12:43.342 fused_ordering(839) 00:12:43.342 fused_ordering(840) 00:12:43.342 fused_ordering(841) 00:12:43.342 fused_ordering(842) 00:12:43.342 fused_ordering(843) 00:12:43.342 fused_ordering(844) 00:12:43.342 fused_ordering(845) 00:12:43.342 fused_ordering(846) 00:12:43.343 fused_ordering(847) 00:12:43.343 fused_ordering(848) 00:12:43.343 fused_ordering(849) 00:12:43.343 fused_ordering(850) 00:12:43.343 fused_ordering(851) 00:12:43.343 fused_ordering(852) 00:12:43.343 fused_ordering(853) 00:12:43.343 fused_ordering(854) 00:12:43.343 fused_ordering(855) 00:12:43.343 fused_ordering(856) 00:12:43.343 fused_ordering(857) 00:12:43.343 fused_ordering(858) 00:12:43.343 fused_ordering(859) 00:12:43.343 fused_ordering(860) 00:12:43.343 fused_ordering(861) 00:12:43.343 fused_ordering(862) 00:12:43.343 fused_ordering(863) 00:12:43.343 fused_ordering(864) 00:12:43.343 fused_ordering(865) 00:12:43.343 fused_ordering(866) 00:12:43.343 fused_ordering(867) 00:12:43.343 fused_ordering(868) 00:12:43.343 fused_ordering(869) 00:12:43.343 fused_ordering(870) 00:12:43.343 fused_ordering(871) 00:12:43.343 fused_ordering(872) 00:12:43.343 fused_ordering(873) 00:12:43.343 fused_ordering(874) 00:12:43.343 fused_ordering(875) 00:12:43.343 fused_ordering(876) 
00:12:43.343 fused_ordering(877) 00:12:43.343 fused_ordering(878) 00:12:43.343 fused_ordering(879) 00:12:43.343 fused_ordering(880) 00:12:43.343 fused_ordering(881) 00:12:43.343 fused_ordering(882) 00:12:43.343 fused_ordering(883) 00:12:43.343 fused_ordering(884) 00:12:43.343 fused_ordering(885) 00:12:43.343 fused_ordering(886) 00:12:43.343 fused_ordering(887) 00:12:43.343 fused_ordering(888) 00:12:43.343 fused_ordering(889) 00:12:43.343 fused_ordering(890) 00:12:43.343 fused_ordering(891) 00:12:43.343 fused_ordering(892) 00:12:43.343 fused_ordering(893) 00:12:43.343 fused_ordering(894) 00:12:43.343 fused_ordering(895) 00:12:43.343 fused_ordering(896) 00:12:43.343 fused_ordering(897) 00:12:43.343 fused_ordering(898) 00:12:43.343 fused_ordering(899) 00:12:43.343 fused_ordering(900) 00:12:43.343 fused_ordering(901) 00:12:43.343 fused_ordering(902) 00:12:43.343 fused_ordering(903) 00:12:43.343 fused_ordering(904) 00:12:43.343 fused_ordering(905) 00:12:43.343 fused_ordering(906) 00:12:43.343 fused_ordering(907) 00:12:43.343 fused_ordering(908) 00:12:43.343 fused_ordering(909) 00:12:43.343 fused_ordering(910) 00:12:43.343 fused_ordering(911) 00:12:43.343 fused_ordering(912) 00:12:43.343 fused_ordering(913) 00:12:43.343 fused_ordering(914) 00:12:43.343 fused_ordering(915) 00:12:43.343 fused_ordering(916) 00:12:43.343 fused_ordering(917) 00:12:43.343 fused_ordering(918) 00:12:43.343 fused_ordering(919) 00:12:43.343 fused_ordering(920) 00:12:43.343 fused_ordering(921) 00:12:43.343 fused_ordering(922) 00:12:43.343 fused_ordering(923) 00:12:43.343 fused_ordering(924) 00:12:43.343 fused_ordering(925) 00:12:43.343 fused_ordering(926) 00:12:43.343 fused_ordering(927) 00:12:43.343 fused_ordering(928) 00:12:43.343 fused_ordering(929) 00:12:43.343 fused_ordering(930) 00:12:43.343 fused_ordering(931) 00:12:43.343 fused_ordering(932) 00:12:43.343 fused_ordering(933) 00:12:43.343 fused_ordering(934) 00:12:43.343 fused_ordering(935) 00:12:43.343 fused_ordering(936) 00:12:43.343 
fused_ordering(937) 00:12:43.343 fused_ordering(938) 00:12:43.343 fused_ordering(939) 00:12:43.343 fused_ordering(940) 00:12:43.343 fused_ordering(941) 00:12:43.343 fused_ordering(942) 00:12:43.343 fused_ordering(943) 00:12:43.343 fused_ordering(944) 00:12:43.343 fused_ordering(945) 00:12:43.343 fused_ordering(946) 00:12:43.343 fused_ordering(947) 00:12:43.343 fused_ordering(948) 00:12:43.343 fused_ordering(949) 00:12:43.343 fused_ordering(950) 00:12:43.343 fused_ordering(951) 00:12:43.343 fused_ordering(952) 00:12:43.343 fused_ordering(953) 00:12:43.343 fused_ordering(954) 00:12:43.343 fused_ordering(955) 00:12:43.343 fused_ordering(956) 00:12:43.343 fused_ordering(957) 00:12:43.343 fused_ordering(958) 00:12:43.343 fused_ordering(959) 00:12:43.343 fused_ordering(960) 00:12:43.343 fused_ordering(961) 00:12:43.343 fused_ordering(962) 00:12:43.343 fused_ordering(963) 00:12:43.343 fused_ordering(964) 00:12:43.343 fused_ordering(965) 00:12:43.343 fused_ordering(966) 00:12:43.343 fused_ordering(967) 00:12:43.343 fused_ordering(968) 00:12:43.343 fused_ordering(969) 00:12:43.343 fused_ordering(970) 00:12:43.343 fused_ordering(971) 00:12:43.343 fused_ordering(972) 00:12:43.343 fused_ordering(973) 00:12:43.343 fused_ordering(974) 00:12:43.343 fused_ordering(975) 00:12:43.343 fused_ordering(976) 00:12:43.343 fused_ordering(977) 00:12:43.343 fused_ordering(978) 00:12:43.343 fused_ordering(979) 00:12:43.343 fused_ordering(980) 00:12:43.343 fused_ordering(981) 00:12:43.343 fused_ordering(982) 00:12:43.343 fused_ordering(983) 00:12:43.343 fused_ordering(984) 00:12:43.343 fused_ordering(985) 00:12:43.343 fused_ordering(986) 00:12:43.343 fused_ordering(987) 00:12:43.343 fused_ordering(988) 00:12:43.343 fused_ordering(989) 00:12:43.343 fused_ordering(990) 00:12:43.343 fused_ordering(991) 00:12:43.343 fused_ordering(992) 00:12:43.343 fused_ordering(993) 00:12:43.343 fused_ordering(994) 00:12:43.343 fused_ordering(995) 00:12:43.343 fused_ordering(996) 00:12:43.343 fused_ordering(997) 
00:12:43.343 fused_ordering(998) 00:12:43.343 fused_ordering(999) 00:12:43.343 fused_ordering(1000) 00:12:43.343 fused_ordering(1001) 00:12:43.343 fused_ordering(1002) 00:12:43.343 fused_ordering(1003) 00:12:43.343 fused_ordering(1004) 00:12:43.343 fused_ordering(1005) 00:12:43.343 fused_ordering(1006) 00:12:43.343 fused_ordering(1007) 00:12:43.343 fused_ordering(1008) 00:12:43.343 fused_ordering(1009) 00:12:43.343 fused_ordering(1010) 00:12:43.343 fused_ordering(1011) 00:12:43.343 fused_ordering(1012) 00:12:43.343 fused_ordering(1013) 00:12:43.343 fused_ordering(1014) 00:12:43.343 fused_ordering(1015) 00:12:43.343 fused_ordering(1016) 00:12:43.343 fused_ordering(1017) 00:12:43.343 fused_ordering(1018) 00:12:43.343 fused_ordering(1019) 00:12:43.343 fused_ordering(1020) 00:12:43.343 fused_ordering(1021) 00:12:43.343 fused_ordering(1022) 00:12:43.343 fused_ordering(1023) 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.343 rmmod nvme_tcp 00:12:43.343 rmmod nvme_fabrics 00:12:43.343 rmmod nvme_keyring 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2862287 ']' 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2862287 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2862287 ']' 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2862287 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2862287 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2862287' 00:12:43.343 killing process with pid 2862287 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2862287 00:12:43.343 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2862287 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.603 09:45:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.142 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:46.142 00:12:46.142 real 0m10.810s 00:12:46.142 user 0m5.161s 00:12:46.142 sys 0m5.877s 00:12:46.142 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.142 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:46.142 ************************************ 00:12:46.142 END TEST nvmf_fused_ordering 00:12:46.142 ************************************ 00:12:46.142 09:45:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:46.142 09:45:08 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:46.142 09:45:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.142 09:45:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.142 ************************************ 00:12:46.142 START TEST nvmf_ns_masking 00:12:46.142 ************************************ 00:12:46.142 09:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:46.142 * Looking for test storage... 00:12:46.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # lcov --version 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.142 09:45:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:12:46.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.142 --rc genhtml_branch_coverage=1 00:12:46.142 --rc genhtml_function_coverage=1 00:12:46.142 --rc genhtml_legend=1 00:12:46.142 --rc geninfo_all_blocks=1 00:12:46.142 --rc geninfo_unexecuted_blocks=1 00:12:46.142 00:12:46.142 ' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:12:46.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.142 --rc genhtml_branch_coverage=1 00:12:46.142 --rc genhtml_function_coverage=1 00:12:46.142 --rc genhtml_legend=1 00:12:46.142 --rc geninfo_all_blocks=1 00:12:46.142 --rc geninfo_unexecuted_blocks=1 00:12:46.142 00:12:46.142 ' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:12:46.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.142 --rc genhtml_branch_coverage=1 00:12:46.142 --rc genhtml_function_coverage=1 00:12:46.142 --rc genhtml_legend=1 00:12:46.142 --rc geninfo_all_blocks=1 00:12:46.142 --rc geninfo_unexecuted_blocks=1 00:12:46.142 00:12:46.142 ' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:12:46.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.142 --rc genhtml_branch_coverage=1 00:12:46.142 --rc 
genhtml_function_coverage=1 00:12:46.142 --rc genhtml_legend=1 00:12:46.142 --rc geninfo_all_blocks=1 00:12:46.142 --rc geninfo_unexecuted_blocks=1 00:12:46.142 00:12:46.142 ' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.142 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c86679b3-d392-4917-8273-ce209117664e 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e6a52ad6-22be-4cc2-b986-5e12c9bd460f 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4fd7d4d9-5a37-4b23-81ca-8ab7789eb904 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.143 09:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.713 09:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.713 09:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:52.713 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.713 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:52.714 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:12:52.714 Found net devices under 0000:86:00.0: cvl_0_0 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:52.714 Found net devices under 0000:86:00.1: cvl_0_1 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.714 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:12:52.714 00:12:52.714 --- 10.0.0.2 ping statistics --- 00:12:52.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.714 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:12:52.714 00:12:52.714 --- 10.0.0.1 ping statistics --- 00:12:52.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.714 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2866596 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2866596 
00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2866596 ']' 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:52.714 [2024-11-20 09:45:15.181448] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:12:52.714 [2024-11-20 09:45:15.181499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.714 [2024-11-20 09:45:15.262626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.714 [2024-11-20 09:45:15.303826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.714 [2024-11-20 09:45:15.303864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:52.714 [2024-11-20 09:45:15.303871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.714 [2024-11-20 09:45:15.303878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.714 [2024-11-20 09:45:15.303883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.714 [2024-11-20 09:45:15.304433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:52.714 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.715 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:52.715 [2024-11-20 09:45:15.608405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.715 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:52.715 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:52.715 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:12:52.715 Malloc1 00:12:52.715 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:52.715 Malloc2 00:12:52.715 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.974 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:53.233 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.492 [2024-11-20 09:45:16.580057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.492 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:53.492 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4fd7d4d9-5a37-4b23-81ca-8ab7789eb904 -a 10.0.0.2 -s 4420 -i 4 00:12:53.492 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.492 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:53.492 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.492 09:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:53.492 09:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:55.399 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:55.399 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:55.399 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:55.658 [ 0]:0x1 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.658 
09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4af061738ff94adcb9a45a96d39b8dd4 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4af061738ff94adcb9a45a96d39b8dd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.658 09:45:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:55.917 [ 0]:0x1 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4af061738ff94adcb9a45a96d39b8dd4 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4af061738ff94adcb9a45a96d39b8dd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:55.917 [ 1]:0x2 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a6630023c3a4b4cb690e2c9307be533 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a6630023c3a4b4cb690e2c9307be533 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:55.917 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.176 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.435 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:56.693 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:56.693 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4fd7d4d9-5a37-4b23-81ca-8ab7789eb904 -a 10.0.0.2 -s 4420 -i 4 00:12:56.693 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:56.693 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:56.693 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.693 09:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:56.693 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:56.693 09:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.705 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.705 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.705 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.705 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.705 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.705 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:58.705 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:58.705 09:45:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:58.963 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:58.963 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.964 [ 0]:0x2 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.964 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a6630023c3a4b4cb690e2c9307be533 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a6630023c3a4b4cb690e2c9307be533 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.224 [ 0]:0x1 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4af061738ff94adcb9a45a96d39b8dd4 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4af061738ff94adcb9a45a96d39b8dd4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:59.224 [ 1]:0x2 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:59.224 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a6630023c3a4b4cb690e2c9307be533 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a6630023c3a4b4cb690e2c9307be533 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:59.483 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:59.742 [ 0]:0x2 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a6630023c3a4b4cb690e2c9307be533 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a6630023c3a4b4cb690e2c9307be533 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.742 09:45:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:00.001 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:00.001 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4fd7d4d9-5a37-4b23-81ca-8ab7789eb904 -a 10.0.0.2 -s 4420 -i 4 00:13:00.259 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:00.259 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:00.259 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.259 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:00.259 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:00.259 09:45:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:02.162 [ 0]:0x1 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:02.162 09:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4af061738ff94adcb9a45a96d39b8dd4 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4af061738ff94adcb9a45a96d39b8dd4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:02.162 [ 1]:0x2 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:02.162 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a6630023c3a4b4cb690e2c9307be533 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a6630023c3a4b4cb690e2c9307be533 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:02.422 
09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:02.422 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.682 [ 0]:0x2 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a6630023c3a4b4cb690e2c9307be533 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a6630023c3a4b4cb690e2c9307be533 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.682 09:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:02.682 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:02.682 [2024-11-20 09:45:25.995084] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:02.682 request: 00:13:02.682 { 00:13:02.682 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.682 "nsid": 2, 00:13:02.682 "host": "nqn.2016-06.io.spdk:host1", 00:13:02.682 "method": "nvmf_ns_remove_host", 00:13:02.682 "req_id": 1 00:13:02.682 } 00:13:02.682 Got JSON-RPC error response 00:13:02.682 response: 00:13:02.682 { 00:13:02.682 "code": -32602, 00:13:02.682 "message": "Invalid parameters" 00:13:02.682 } 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:02.942 09:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:02.942 [ 0]:0x2 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8a6630023c3a4b4cb690e2c9307be533 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8a6630023c3a4b4cb690e2c9307be533 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2868467 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2868467 
/var/tmp/host.sock 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2868467 ']' 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:02.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.942 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:02.942 [2024-11-20 09:45:26.233099] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
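The `ns_is_visible` checks traced above decide visibility by comparing the NGUID that `nvme id-ns` reports against an all-zeros string: a masked namespace reports NGUID 0. A minimal standalone sketch of that comparison, with the NGUID value copied from the log output above (the real helper is `ns_is_visible` in target/ns_masking.sh, which also greps `nvme list-ns`):

```shell
# Visibility check as exercised in the trace: a namespace counts as visible
# when its NGUID is non-zero. The nguid below is copied from the log; in the
# test it comes from: nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
nguid="4af061738ff94adcb9a45a96d39b8dd4"
zeros=$(printf '0%.0s' {1..32})   # 32 zero characters
if [[ "$nguid" != "$zeros" ]]; then
    echo "namespace visible"
else
    echo "namespace masked"
fi
```

After `nvmf_ns_remove_host`, the same query returns `00000000000000000000000000000000`, the comparison fails, and the surrounding `NOT` wrapper in the trace asserts exactly that.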
00:13:02.942 [2024-11-20 09:45:26.233145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868467 ] 00:13:03.201 [2024-11-20 09:45:26.308443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.201 [2024-11-20 09:45:26.349656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.459 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.459 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:03.459 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.459 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:03.718 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c86679b3-d392-4917-8273-ce209117664e 00:13:03.718 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:03.718 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C86679B3D39249178273CE209117664E -i 00:13:03.977 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e6a52ad6-22be-4cc2-b986-5e12c9bd460f 00:13:03.977 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:03.977 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E6A52AD622BE4CC2B9865E12C9BD460F -i 00:13:04.235 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:04.235 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:04.494 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:04.494 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:05.061 nvme0n1 00:13:05.061 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:05.061 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:05.320 nvme1n2 00:13:05.320 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:05.320 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:05.320 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:05.320 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:05.320 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:05.578 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:05.578 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:05.578 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:05.578 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:05.578 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c86679b3-d392-4917-8273-ce209117664e == \c\8\6\6\7\9\b\3\-\d\3\9\2\-\4\9\1\7\-\8\2\7\3\-\c\e\2\0\9\1\1\7\6\6\4\e ]] 00:13:05.578 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:05.578 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:05.578 09:45:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:05.837 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ e6a52ad6-22be-4cc2-b986-5e12c9bd460f == \e\6\a\5\2\a\d\6\-\2\2\b\e\-\4\c\c\2\-\b\9\8\6\-\5\e\1\2\c\9\b\d\4\6\0\f ]] 00:13:05.837 09:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.094 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c86679b3-d392-4917-8273-ce209117664e 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C86679B3D39249178273CE209117664E 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C86679B3D39249178273CE209117664E 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C86679B3D39249178273CE209117664E 00:13:06.353 [2024-11-20 09:45:29.641160] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:06.353 [2024-11-20 09:45:29.641190] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:06.353 [2024-11-20 09:45:29.641198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.353 request: 00:13:06.353 { 00:13:06.353 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.353 "namespace": { 00:13:06.353 "bdev_name": "invalid", 00:13:06.353 "nsid": 1, 00:13:06.353 "nguid": "C86679B3D39249178273CE209117664E", 00:13:06.353 "no_auto_visible": false 00:13:06.353 }, 00:13:06.353 "method": "nvmf_subsystem_add_ns", 00:13:06.353 "req_id": 1 00:13:06.353 } 00:13:06.353 Got JSON-RPC error response 00:13:06.353 response: 00:13:06.353 { 00:13:06.353 "code": -32602, 00:13:06.353 "message": "Invalid parameters" 00:13:06.353 } 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c86679b3-d392-4917-8273-ce209117664e 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:06.353 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C86679B3D39249178273CE209117664E -i 00:13:06.612 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:09.147 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:09.147 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:09.147 09:45:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2868467 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2868467 ']' 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2868467 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2868467 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2868467' 00:13:09.147 killing process with pid 2868467 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2868467 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2868467 00:13:09.147 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.406 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:09.406 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:09.406 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:09.406 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:09.406 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:09.406 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:09.406 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:09.406 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:09.406 rmmod nvme_tcp 00:13:09.406 rmmod 
nvme_fabrics 00:13:09.406 rmmod nvme_keyring 00:13:09.406 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.407 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:09.407 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:09.407 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2866596 ']' 00:13:09.407 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2866596 00:13:09.407 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2866596 ']' 00:13:09.407 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2866596 00:13:09.407 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:09.407 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.407 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2866596 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2866596' 00:13:09.666 killing process with pid 2866596 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2866596 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2866596 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:09.666 
09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.666 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:12.203 00:13:12.203 real 0m26.091s 00:13:12.203 user 0m31.267s 00:13:12.203 sys 0m7.044s 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:12.203 ************************************ 00:13:12.203 END TEST nvmf_ns_masking 00:13:12.203 ************************************ 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:12.203 ************************************ 00:13:12.203 START TEST nvmf_nvme_cli 00:13:12.203 ************************************ 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:12.203 * Looking for test storage... 00:13:12.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # lcov --version 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.203 09:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.203 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:13:12.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.204 --rc genhtml_branch_coverage=1 00:13:12.204 --rc genhtml_function_coverage=1 00:13:12.204 --rc genhtml_legend=1 00:13:12.204 --rc geninfo_all_blocks=1 00:13:12.204 --rc geninfo_unexecuted_blocks=1 00:13:12.204 
00:13:12.204 ' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:13:12.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.204 --rc genhtml_branch_coverage=1 00:13:12.204 --rc genhtml_function_coverage=1 00:13:12.204 --rc genhtml_legend=1 00:13:12.204 --rc geninfo_all_blocks=1 00:13:12.204 --rc geninfo_unexecuted_blocks=1 00:13:12.204 00:13:12.204 ' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:13:12.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.204 --rc genhtml_branch_coverage=1 00:13:12.204 --rc genhtml_function_coverage=1 00:13:12.204 --rc genhtml_legend=1 00:13:12.204 --rc geninfo_all_blocks=1 00:13:12.204 --rc geninfo_unexecuted_blocks=1 00:13:12.204 00:13:12.204 ' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:13:12.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.204 --rc genhtml_branch_coverage=1 00:13:12.204 --rc genhtml_function_coverage=1 00:13:12.204 --rc genhtml_legend=1 00:13:12.204 --rc geninfo_all_blocks=1 00:13:12.204 --rc geninfo_unexecuted_blocks=1 00:13:12.204 00:13:12.204 ' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
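The cmp_versions trace above walks two dotted versions field by field to decide whether the installed lcov (1.15) is older than 2. A minimal bash sketch of that kind of comparison follows; the function name `lt` mirrors the helper invoked in the trace, but the body here is illustrative, not SPDK's exact scripts/common.sh code.

```shell
# Illustrative dotted-version "less than" check (bash), in the spirit of
# the cmp_versions trace above. Missing fields are treated as 0.
lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)   # split "1.15" -> (1 15) on dots
    local i a b
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}
        ((a < b)) && return 0      # first lower field decides: older
        ((a > b)) && return 1      # first higher field decides: newer
    done
    return 1                       # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 is older than 2"
```

Note the comparison is numeric per field, so 1.15 sorts below 2 even though "1.15" > "2" lexically, which is exactly why the script takes the lcov fallback path above.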
00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.204 09:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:12.204 09:45:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:18.776 09:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:18.776 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:18.776 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.776 09:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:18.776 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:18.777 Found net devices under 0000:86:00.0: cvl_0_0 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:18.777 Found net devices under 0000:86:00.1: cvl_0_1 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.777 09:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:18.777 09:45:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:18.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:13:18.777 00:13:18.777 --- 10.0.0.2 ping statistics --- 00:13:18.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.777 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:18.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:13:18.777 00:13:18.777 --- 10.0.0.1 ping statistics --- 00:13:18.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.777 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:18.777 09:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2873182 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2873182 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2873182 ']' 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.777 09:45:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:18.777 [2024-11-20 09:45:41.317029] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
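The nvmf_tcp_init portion of the trace above builds a point-to-point test rig by moving one port of the NIC pair (cvl_0_0) into a network namespace and leaving its peer (cvl_0_1) in the default namespace, then opening the NVMe/TCP port in iptables and ping-testing both directions. A sketch of the equivalent commands, collected into one function, is below; interface names, addresses, and the port come verbatim from the log, and everything here requires root, so the function is shown but not invoked.

```shell
# Sketch of the namespace plumbing performed by nvmf/common.sh above.
# Must run as root; cvl_0_0/cvl_0_1 and 10.0.0.0/24 are taken from the trace.
setup_nvmf_tcp_ns() {
    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to the 4420 listener on the initiator-facing port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions, as the trace does
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
```

This split lets the nvmf_tgt later run entirely inside cvl_0_0_ns_spdk (via `ip netns exec`, as seen at the nvmfappstart step) while the initiator-side nvme-cli commands run in the default namespace over a real physical link.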
00:13:18.777 [2024-11-20 09:45:41.317070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.777 [2024-11-20 09:45:41.394620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.777 [2024-11-20 09:45:41.436698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.777 [2024-11-20 09:45:41.436737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.777 [2024-11-20 09:45:41.436744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.777 [2024-11-20 09:45:41.436750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.777 [2024-11-20 09:45:41.436755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
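Earlier in this trace, nvmf/common.sh line 33 evaluated `'[' '' -eq 1 ']'` and bash emitted `[: : integer expression expected`, because `-eq` requires integers on both sides and the tested variable was empty. A small sketch of the usual guard for that pattern follows; the variable name here is purely illustrative, not the actual flag common.sh tests.

```shell
# An empty value on either side of -eq produces
# "[: : integer expression expected". Defaulting the expansion avoids it.
SOME_TEST_FLAG=""                          # stands in for the unset flag

if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then  # "" becomes "0": valid integer
    echo "flag enabled"
else
    echo "flag disabled"                   # this branch runs, with no error
fi
```

The run above is unaffected functionally, since `[` returns false either way, but the warning is noise in every test's log.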
00:13:18.777 [2024-11-20 09:45:41.438387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.777 [2024-11-20 09:45:41.438494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.777 [2024-11-20 09:45:41.438576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.777 [2024-11-20 09:45:41.438576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.036 [2024-11-20 09:45:42.205873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.036 Malloc0 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.036 Malloc1 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:19.036 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.037 [2024-11-20 09:45:42.300220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.037 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:19.295 00:13:19.295 Discovery Log Number of Records 2, Generation counter 2 00:13:19.295 =====Discovery Log Entry 0====== 00:13:19.295 trtype: tcp 00:13:19.295 adrfam: ipv4 00:13:19.295 subtype: current discovery subsystem 00:13:19.295 treq: not required 00:13:19.295 portid: 0 00:13:19.295 trsvcid: 4420 
00:13:19.295 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:19.295 traddr: 10.0.0.2 00:13:19.295 eflags: explicit discovery connections, duplicate discovery information 00:13:19.295 sectype: none 00:13:19.295 =====Discovery Log Entry 1====== 00:13:19.295 trtype: tcp 00:13:19.295 adrfam: ipv4 00:13:19.295 subtype: nvme subsystem 00:13:19.295 treq: not required 00:13:19.295 portid: 0 00:13:19.295 trsvcid: 4420 00:13:19.295 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:19.295 traddr: 10.0.0.2 00:13:19.295 eflags: none 00:13:19.295 sectype: none 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:19.295 09:45:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.673 09:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:20.673 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:20.673 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.673 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:20.673 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:20.673 09:45:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:22.578 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:22.578 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:22.578 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:22.579 
09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:22.579 /dev/nvme0n2 ]] 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:22.579 rmmod nvme_tcp 00:13:22.579 rmmod nvme_fabrics 00:13:22.579 rmmod nvme_keyring 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2873182 ']' 
00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2873182 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2873182 ']' 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2873182 00:13:22.579 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:22.838 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.839 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2873182 00:13:22.839 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.839 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.839 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2873182' 00:13:22.839 killing process with pid 2873182 00:13:22.839 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2873182 00:13:22.839 09:45:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2873182 00:13:22.839 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:22.839 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:22.839 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:22.839 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:22.839 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:22.839 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:22.839 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:23.099 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.099 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.099 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.099 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.099 09:45:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.005 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:25.005 00:13:25.005 real 0m13.116s 00:13:25.005 user 0m20.721s 00:13:25.005 sys 0m5.040s 00:13:25.005 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.005 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.005 ************************************ 00:13:25.005 END TEST nvmf_nvme_cli 00:13:25.005 ************************************ 00:13:25.005 09:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:25.005 09:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:25.005 09:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:25.005 09:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.005 09:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:25.005 ************************************ 00:13:25.005 
START TEST nvmf_vfio_user 00:13:25.005 ************************************ 00:13:25.005 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:25.265 * Looking for test storage... 00:13:25.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.265 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:13:25.265 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1703 -- # lcov --version 00:13:25.265 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:13:25.265 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:13:25.265 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.265 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.266 09:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:25.266 09:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:13:25.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.266 --rc genhtml_branch_coverage=1 00:13:25.266 --rc genhtml_function_coverage=1 00:13:25.266 --rc genhtml_legend=1 00:13:25.266 --rc geninfo_all_blocks=1 00:13:25.266 --rc geninfo_unexecuted_blocks=1 00:13:25.266 00:13:25.266 ' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:13:25.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.266 --rc genhtml_branch_coverage=1 00:13:25.266 --rc genhtml_function_coverage=1 00:13:25.266 --rc genhtml_legend=1 00:13:25.266 --rc geninfo_all_blocks=1 00:13:25.266 --rc geninfo_unexecuted_blocks=1 00:13:25.266 00:13:25.266 ' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:13:25.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.266 --rc genhtml_branch_coverage=1 00:13:25.266 --rc genhtml_function_coverage=1 00:13:25.266 --rc genhtml_legend=1 00:13:25.266 --rc geninfo_all_blocks=1 00:13:25.266 --rc geninfo_unexecuted_blocks=1 00:13:25.266 00:13:25.266 ' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:13:25.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.266 --rc genhtml_branch_coverage=1 00:13:25.266 --rc genhtml_function_coverage=1 00:13:25.266 --rc genhtml_legend=1 00:13:25.266 --rc geninfo_all_blocks=1 00:13:25.266 --rc geninfo_unexecuted_blocks=1 00:13:25.266 00:13:25.266 ' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.266 
09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:25.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:25.266 09:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:25.266 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2874471 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2874471' 00:13:25.267 Process pid: 2874471 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2874471 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2874471 ']' 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.267 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:25.267 [2024-11-20 09:45:48.572600] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:13:25.267 [2024-11-20 09:45:48.572651] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.526 [2024-11-20 09:45:48.645421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.526 [2024-11-20 09:45:48.688669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.526 [2024-11-20 09:45:48.688708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.526 [2024-11-20 09:45:48.688715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.526 [2024-11-20 09:45:48.688721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.526 [2024-11-20 09:45:48.688726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
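The VFIO-user bring-up that follows in the trace (one socket directory, malloc bdev, subsystem, namespace, and listener per device, with `NUM_DEVICES=2`) can be sketched the same way. The RPC names, NQNs, serials, and socket paths are copied from the log; the `scripts/rpc.py` path and the dry-run echo loop are assumptions for illustration.

```shell
#!/usr/bin/env bash
# Sketch of the per-device VFIO-user setup loop from target/nvmf_vfio_user.sh
# above (dry run). Assumption: run from the SPDK repo root.
rpc="scripts/rpc.py"
NUM_DEVICES=2

setup_cmds=("nvmf_create_transport -t VFIOUSER")
for i in $(seq 1 "$NUM_DEVICES"); do
  # The test also does: mkdir -p "$dir" before adding the listener.
  dir="/var/run/vfio-user/domain/vfio-user$i/$i"
  setup_cmds+=("bdev_malloc_create 64 512 -b Malloc$i")
  setup_cmds+=("nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i")
  setup_cmds+=("nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i")
  setup_cmds+=("nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a $dir -s 0")
done

# Print rather than execute, so the sequence can be reviewed offline.
for cmd in "${setup_cmds[@]}"; do
  echo "$rpc $cmd"
done
```

Note the listener address for a VFIOUSER transport is a filesystem path (the per-controller socket directory), not an IP, and the service id is passed as `0`, matching the log.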
00:13:25.526 [2024-11-20 09:45:48.690360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.526 [2024-11-20 09:45:48.690471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.526 [2024-11-20 09:45:48.690562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.526 [2024-11-20 09:45:48.690563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.526 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.526 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:25.526 09:45:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:26.466 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:26.726 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:26.726 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:26.726 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:26.726 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:26.726 09:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:26.986 Malloc1 00:13:26.986 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:27.245 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:27.503 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:27.762 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.763 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:27.763 09:45:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:27.763 Malloc2 00:13:27.763 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:28.021 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:28.280 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:28.541 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:28.541 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:28.541 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:28.541 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:28.541 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:28.541 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:28.541 [2024-11-20 09:45:51.672208] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:13:28.541 [2024-11-20 09:45:51.672232] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875001 ] 00:13:28.541 [2024-11-20 09:45:51.711979] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:28.541 [2024-11-20 09:45:51.725337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:28.541 [2024-11-20 09:45:51.725360] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7feed3ad3000 00:13:28.541 [2024-11-20 09:45:51.726330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.541 [2024-11-20 09:45:51.727334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.541 [2024-11-20 09:45:51.728336] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.541 [2024-11-20 09:45:51.729335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:28.541 [2024-11-20 09:45:51.730347] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:28.541 [2024-11-20 09:45:51.731343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.541 [2024-11-20 09:45:51.732352] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:28.541 [2024-11-20 09:45:51.733348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:28.541 [2024-11-20 09:45:51.734362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:28.541 [2024-11-20 09:45:51.734376] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7feed3ac8000 00:13:28.541 [2024-11-20 09:45:51.735319] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:28.541 [2024-11-20 09:45:51.744921] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:28.541 [2024-11-20 09:45:51.744957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:28.541 [2024-11-20 09:45:51.750473] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:28.541 [2024-11-20 09:45:51.750511] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:28.541 [2024-11-20 09:45:51.750583] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:28.541 [2024-11-20 09:45:51.750599] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:28.541 [2024-11-20 09:45:51.750604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:28.541 [2024-11-20 09:45:51.751474] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:28.541 [2024-11-20 09:45:51.751483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:28.541 [2024-11-20 09:45:51.751490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:28.541 [2024-11-20 09:45:51.752481] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:28.541 [2024-11-20 09:45:51.752490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:28.541 [2024-11-20 09:45:51.752496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:28.541 [2024-11-20 09:45:51.753485] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:28.541 [2024-11-20 09:45:51.753494] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:28.541 [2024-11-20 09:45:51.754488] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:28.541 [2024-11-20 09:45:51.754496] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:28.541 [2024-11-20 09:45:51.754501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:28.541 [2024-11-20 09:45:51.754506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:28.541 [2024-11-20 09:45:51.754614] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:28.541 [2024-11-20 09:45:51.754618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:28.542 [2024-11-20 09:45:51.754623] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:28.542 [2024-11-20 09:45:51.755497] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:28.542 [2024-11-20 09:45:51.756501] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:28.542 [2024-11-20 09:45:51.757510] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
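The per-device setup sequence traced above (transport creation, socket directories, malloc bdevs, subsystems, namespaces, listeners) can be condensed into the following sketch; every rpc.py call mirrors one that appears verbatim in the log.

```shell
# Sketch of setup_nvmf_vfio_user as traced in this log: one pass per device,
# NUM_DEVICES=2, block size 512, bdev size 64 MiB.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Create the vfio-user transport and the root socket directory once.
$rpc_py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user

for i in 1 2; do
    # Each controller gets its own vfio-user socket directory.
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    # Back the namespace with a 64 MiB, 512-byte-block malloc bdev.
    $rpc_py bdev_malloc_create 64 512 -b "Malloc$i"
    # Subsystem with any-host access (-a) and serial number SPDK$i.
    $rpc_py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc_py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    # Expose the subsystem on the vfio-user socket path.
    $rpc_py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done
```

After this, each socket directory contains a `cntrl` endpoint that vfio-user clients such as `spdk_nvme_identify` attach to, as the `spdk_vfio_user_setup` debug line above shows.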
00:13:28.542 [2024-11-20 09:45:51.758507] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:28.542 [2024-11-20 09:45:51.758570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:28.542 [2024-11-20 09:45:51.759517] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:28.542 [2024-11-20 09:45:51.759526] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:28.542 [2024-11-20 09:45:51.759531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759548] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:28.542 [2024-11-20 09:45:51.759555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759569] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:28.542 [2024-11-20 09:45:51.759573] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:28.542 [2024-11-20 09:45:51.759577] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:28.542 [2024-11-20 09:45:51.759590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.759636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.759646] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:28.542 [2024-11-20 09:45:51.759650] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:28.542 [2024-11-20 09:45:51.759654] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:28.542 [2024-11-20 09:45:51.759659] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:28.542 [2024-11-20 09:45:51.759665] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:28.542 [2024-11-20 09:45:51.759670] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:28.542 [2024-11-20 09:45:51.759674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.759701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.759711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.542 [2024-11-20 
09:45:51.759720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.542 [2024-11-20 09:45:51.759728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.542 [2024-11-20 09:45:51.759735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.542 [2024-11-20 09:45:51.759739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.759769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.759776] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:28.542 [2024-11-20 09:45:51.759780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.759811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.759862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759876] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:28.542 [2024-11-20 09:45:51.759880] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:28.542 [2024-11-20 09:45:51.759883] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:28.542 [2024-11-20 09:45:51.759889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.759900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.759909] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:28.542 [2024-11-20 09:45:51.759917] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759930] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:28.542 [2024-11-20 09:45:51.759934] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:28.542 [2024-11-20 09:45:51.759938] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:28.542 [2024-11-20 09:45:51.759944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.759969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.759981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.759994] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:28.542 [2024-11-20 09:45:51.759998] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:28.542 [2024-11-20 09:45:51.760001] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:28.542 [2024-11-20 09:45:51.760007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.760017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.760024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.760030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.760037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.760042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.760047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.760052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.760056] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:28.542 [2024-11-20 09:45:51.760060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:28.542 [2024-11-20 09:45:51.760065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:28.542 [2024-11-20 09:45:51.760081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.760090] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.760100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.760109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.760119] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.760129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.760141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:28.542 [2024-11-20 09:45:51.760150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:28.542 [2024-11-20 09:45:51.760161] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:28.543 [2024-11-20 09:45:51.760165] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:28.543 [2024-11-20 09:45:51.760168] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:28.543 [2024-11-20 09:45:51.760172] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:28.543 [2024-11-20 09:45:51.760175] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:28.543 [2024-11-20 09:45:51.760180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:28.543 [2024-11-20 09:45:51.760187] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:28.543 [2024-11-20 09:45:51.760191] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:28.543 [2024-11-20 09:45:51.760194] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:28.543 [2024-11-20 09:45:51.760199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:28.543 [2024-11-20 09:45:51.760205] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:28.543 [2024-11-20 09:45:51.760209] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:28.543 [2024-11-20 09:45:51.760212] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:28.543 [2024-11-20 09:45:51.760218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:28.543 [2024-11-20 09:45:51.760224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:28.543 [2024-11-20 09:45:51.760228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:28.543 [2024-11-20 09:45:51.760231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:28.543 [2024-11-20 09:45:51.760237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:28.543 [2024-11-20 09:45:51.760243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:28.543 [2024-11-20 09:45:51.760254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:28.543 [2024-11-20 09:45:51.760264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:28.543 [2024-11-20 09:45:51.760270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:28.543 ===================================================== 00:13:28.543 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:28.543 ===================================================== 00:13:28.543 Controller Capabilities/Features 00:13:28.543 ================================ 00:13:28.543 Vendor ID: 4e58 00:13:28.543 Subsystem Vendor ID: 4e58 00:13:28.543 Serial Number: SPDK1 00:13:28.543 Model Number: SPDK bdev Controller 00:13:28.543 Firmware Version: 25.01 00:13:28.543 Recommended Arb Burst: 6 00:13:28.543 IEEE OUI Identifier: 8d 6b 50 00:13:28.543 Multi-path I/O 00:13:28.543 May have multiple subsystem ports: Yes 00:13:28.543 May have multiple controllers: Yes 00:13:28.543 Associated with SR-IOV VF: No 00:13:28.543 Max Data Transfer Size: 131072 00:13:28.543 Max Number of Namespaces: 32 00:13:28.543 Max Number of I/O Queues: 127 00:13:28.543 NVMe Specification Version (VS): 1.3 00:13:28.543 NVMe Specification Version (Identify): 1.3 00:13:28.543 Maximum Queue Entries: 256 00:13:28.543 Contiguous Queues Required: Yes 00:13:28.543 Arbitration Mechanisms Supported 00:13:28.543 Weighted Round Robin: Not Supported 00:13:28.543 Vendor Specific: Not Supported 00:13:28.543 Reset Timeout: 15000 ms 00:13:28.543 Doorbell Stride: 4 bytes 00:13:28.543 NVM Subsystem Reset: Not Supported 00:13:28.543 Command Sets Supported 00:13:28.543 NVM Command Set: Supported 00:13:28.543 Boot Partition: Not Supported 00:13:28.543 Memory 
Page Size Minimum: 4096 bytes 00:13:28.543 Memory Page Size Maximum: 4096 bytes 00:13:28.543 Persistent Memory Region: Not Supported 00:13:28.543 Optional Asynchronous Events Supported 00:13:28.543 Namespace Attribute Notices: Supported 00:13:28.543 Firmware Activation Notices: Not Supported 00:13:28.543 ANA Change Notices: Not Supported 00:13:28.543 PLE Aggregate Log Change Notices: Not Supported 00:13:28.543 LBA Status Info Alert Notices: Not Supported 00:13:28.543 EGE Aggregate Log Change Notices: Not Supported 00:13:28.543 Normal NVM Subsystem Shutdown event: Not Supported 00:13:28.543 Zone Descriptor Change Notices: Not Supported 00:13:28.543 Discovery Log Change Notices: Not Supported 00:13:28.543 Controller Attributes 00:13:28.543 128-bit Host Identifier: Supported 00:13:28.543 Non-Operational Permissive Mode: Not Supported 00:13:28.543 NVM Sets: Not Supported 00:13:28.543 Read Recovery Levels: Not Supported 00:13:28.543 Endurance Groups: Not Supported 00:13:28.543 Predictable Latency Mode: Not Supported 00:13:28.543 Traffic Based Keep ALive: Not Supported 00:13:28.543 Namespace Granularity: Not Supported 00:13:28.543 SQ Associations: Not Supported 00:13:28.543 UUID List: Not Supported 00:13:28.543 Multi-Domain Subsystem: Not Supported 00:13:28.543 Fixed Capacity Management: Not Supported 00:13:28.543 Variable Capacity Management: Not Supported 00:13:28.543 Delete Endurance Group: Not Supported 00:13:28.543 Delete NVM Set: Not Supported 00:13:28.543 Extended LBA Formats Supported: Not Supported 00:13:28.543 Flexible Data Placement Supported: Not Supported 00:13:28.543 00:13:28.543 Controller Memory Buffer Support 00:13:28.543 ================================ 00:13:28.543 Supported: No 00:13:28.543 00:13:28.543 Persistent Memory Region Support 00:13:28.543 ================================ 00:13:28.543 Supported: No 00:13:28.543 00:13:28.543 Admin Command Set Attributes 00:13:28.543 ============================ 00:13:28.543 Security Send/Receive: Not Supported 
00:13:28.543 Format NVM: Not Supported 00:13:28.543 Firmware Activate/Download: Not Supported 00:13:28.543 Namespace Management: Not Supported 00:13:28.543 Device Self-Test: Not Supported 00:13:28.543 Directives: Not Supported 00:13:28.543 NVMe-MI: Not Supported 00:13:28.543 Virtualization Management: Not Supported 00:13:28.543 Doorbell Buffer Config: Not Supported 00:13:28.543 Get LBA Status Capability: Not Supported 00:13:28.543 Command & Feature Lockdown Capability: Not Supported 00:13:28.543 Abort Command Limit: 4 00:13:28.543 Async Event Request Limit: 4 00:13:28.543 Number of Firmware Slots: N/A 00:13:28.543 Firmware Slot 1 Read-Only: N/A 00:13:28.543 Firmware Activation Without Reset: N/A 00:13:28.543 Multiple Update Detection Support: N/A 00:13:28.543 Firmware Update Granularity: No Information Provided 00:13:28.543 Per-Namespace SMART Log: No 00:13:28.543 Asymmetric Namespace Access Log Page: Not Supported 00:13:28.543 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:28.543 Command Effects Log Page: Supported 00:13:28.543 Get Log Page Extended Data: Supported 00:13:28.543 Telemetry Log Pages: Not Supported 00:13:28.543 Persistent Event Log Pages: Not Supported 00:13:28.543 Supported Log Pages Log Page: May Support 00:13:28.543 Commands Supported & Effects Log Page: Not Supported 00:13:28.543 Feature Identifiers & Effects Log Page:May Support 00:13:28.543 NVMe-MI Commands & Effects Log Page: May Support 00:13:28.543 Data Area 4 for Telemetry Log: Not Supported 00:13:28.543 Error Log Page Entries Supported: 128 00:13:28.543 Keep Alive: Supported 00:13:28.543 Keep Alive Granularity: 10000 ms 00:13:28.543 00:13:28.543 NVM Command Set Attributes 00:13:28.543 ========================== 00:13:28.543 Submission Queue Entry Size 00:13:28.543 Max: 64 00:13:28.543 Min: 64 00:13:28.543 Completion Queue Entry Size 00:13:28.543 Max: 16 00:13:28.543 Min: 16 00:13:28.543 Number of Namespaces: 32 00:13:28.543 Compare Command: Supported 00:13:28.543 Write Uncorrectable 
Command: Not Supported 00:13:28.543 Dataset Management Command: Supported 00:13:28.543 Write Zeroes Command: Supported 00:13:28.543 Set Features Save Field: Not Supported 00:13:28.543 Reservations: Not Supported 00:13:28.543 Timestamp: Not Supported 00:13:28.543 Copy: Supported 00:13:28.543 Volatile Write Cache: Present 00:13:28.543 Atomic Write Unit (Normal): 1 00:13:28.543 Atomic Write Unit (PFail): 1 00:13:28.543 Atomic Compare & Write Unit: 1 00:13:28.543 Fused Compare & Write: Supported 00:13:28.543 Scatter-Gather List 00:13:28.543 SGL Command Set: Supported (Dword aligned) 00:13:28.543 SGL Keyed: Not Supported 00:13:28.543 SGL Bit Bucket Descriptor: Not Supported 00:13:28.543 SGL Metadata Pointer: Not Supported 00:13:28.543 Oversized SGL: Not Supported 00:13:28.543 SGL Metadata Address: Not Supported 00:13:28.543 SGL Offset: Not Supported 00:13:28.543 Transport SGL Data Block: Not Supported 00:13:28.543 Replay Protected Memory Block: Not Supported 00:13:28.543 00:13:28.543 Firmware Slot Information 00:13:28.543 ========================= 00:13:28.543 Active slot: 1 00:13:28.543 Slot 1 Firmware Revision: 25.01 00:13:28.543 00:13:28.544 00:13:28.544 Commands Supported and Effects 00:13:28.544 ============================== 00:13:28.544 Admin Commands 00:13:28.544 -------------- 00:13:28.544 Get Log Page (02h): Supported 00:13:28.544 Identify (06h): Supported 00:13:28.544 Abort (08h): Supported 00:13:28.544 Set Features (09h): Supported 00:13:28.544 Get Features (0Ah): Supported 00:13:28.544 Asynchronous Event Request (0Ch): Supported 00:13:28.544 Keep Alive (18h): Supported 00:13:28.544 I/O Commands 00:13:28.544 ------------ 00:13:28.544 Flush (00h): Supported LBA-Change 00:13:28.544 Write (01h): Supported LBA-Change 00:13:28.544 Read (02h): Supported 00:13:28.544 Compare (05h): Supported 00:13:28.544 Write Zeroes (08h): Supported LBA-Change 00:13:28.544 Dataset Management (09h): Supported LBA-Change 00:13:28.544 Copy (19h): Supported LBA-Change 00:13:28.544 
[2024-11-20 09:45:51.760359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
[2024-11-20 09:45:51.760368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
[2024-11-20 09:45:51.760393] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
[2024-11-20 09:45:51.760402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:45:51.760408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:45:51.760415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:45:51.760421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-20 09:45:51.760523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
[2024-11-20 09:45:51.760532] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
[2024-11-20 09:45:51.761531] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
[2024-11-20 09:45:51.761581] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
[2024-11-20 09:45:51.761588] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
[2024-11-20 09:45:51.762531] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
[2024-11-20 09:45:51.762541] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
[2024-11-20 09:45:51.762592] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
[2024-11-20 09:45:51.764565] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:28.544 Error Log 00:13:28.544 ========= 00:13:28.544 00:13:28.544 Arbitration 00:13:28.544 =========== 00:13:28.544 Arbitration Burst: 1 00:13:28.544 00:13:28.544 Power Management 00:13:28.544 ================ 00:13:28.544 Number of Power States: 1 00:13:28.544 Current Power State: Power State #0 00:13:28.544 Power State #0: 00:13:28.544 Max Power: 0.00 W 00:13:28.544 Non-Operational State: Operational 00:13:28.544 Entry Latency: Not Reported 00:13:28.544 Exit Latency: Not Reported 00:13:28.544 Relative Read Throughput: 0 00:13:28.544 Relative Read Latency: 0 00:13:28.544 Relative Write Throughput: 0 00:13:28.544 Relative Write Latency: 0 00:13:28.544 Idle Power: Not Reported 00:13:28.544 Active Power: Not Reported 00:13:28.544 Non-Operational Permissive Mode: Not Supported 00:13:28.544 00:13:28.544 Health Information 00:13:28.544 ================== 00:13:28.544 Critical Warnings: 00:13:28.544 Available Spare Space: OK 00:13:28.544 Temperature: OK 00:13:28.544 Device Reliability: OK 00:13:28.544 Read Only: No 00:13:28.544 Volatile Memory Backup: OK 00:13:28.544 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:28.544 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:28.544 Available Spare: 0% 00:13:28.544 Available Spare Threshold: 0% 00:13:28.544 Life Percentage Used: 0% 
00:13:28.544 Data Units Read: 0 00:13:28.544 Data Units Written: 0 00:13:28.544 Host Read Commands: 0 00:13:28.544 Host Write Commands: 0 00:13:28.544 Controller Busy Time: 0 minutes 00:13:28.544 Power Cycles: 0 00:13:28.544 Power On Hours: 0 hours 00:13:28.544 Unsafe Shutdowns: 0 00:13:28.544 Unrecoverable Media Errors: 0 00:13:28.544 Lifetime Error Log Entries: 0 00:13:28.544 Warning Temperature Time: 0 minutes 00:13:28.544 Critical Temperature Time: 0 minutes 00:13:28.544 00:13:28.544 Number of Queues 00:13:28.544 ================ 00:13:28.544 Number of I/O Submission Queues: 127 00:13:28.544 Number of I/O Completion Queues: 127 00:13:28.544 00:13:28.544 Active Namespaces 00:13:28.544 ================= 00:13:28.544 Namespace ID:1 00:13:28.544 Error Recovery Timeout: Unlimited 00:13:28.544 Command Set Identifier: NVM (00h) 00:13:28.544 Deallocate: Supported 00:13:28.544 Deallocated/Unwritten Error: Not Supported 00:13:28.544 Deallocated Read Value: Unknown 00:13:28.544 Deallocate in Write Zeroes: Not Supported 00:13:28.544 Deallocated Guard Field: 0xFFFF 00:13:28.544 Flush: Supported 00:13:28.544 Reservation: Supported 00:13:28.544 Namespace Sharing Capabilities: Multiple Controllers 00:13:28.544 Size (in LBAs): 131072 (0GiB) 00:13:28.544 Capacity (in LBAs): 131072 (0GiB) 00:13:28.544 Utilization (in LBAs): 131072 (0GiB) 00:13:28.544 NGUID: 37E3C29C232B471687C8F00A6803C0BD 00:13:28.544 UUID: 37e3c29c-232b-4716-87c8-f00a6803c0bd 00:13:28.544 Thin Provisioning: Not Supported 00:13:28.544 Per-NS Atomic Units: Yes 00:13:28.544 Atomic Boundary Size (Normal): 0 00:13:28.544 Atomic Boundary Size (PFail): 0 00:13:28.544 Atomic Boundary Offset: 0 00:13:28.544 Maximum Single Source Range Length: 65535 00:13:28.544 Maximum Copy Length: 65535 00:13:28.544 Maximum Source Range Count: 1 00:13:28.544 NGUID/EUI64 Never Reused: No 00:13:28.544 Namespace Write Protected: No 00:13:28.544 Number of LBA Formats: 1 00:13:28.544 Current LBA Format: LBA Format #00 00:13:28.544 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:28.544 00:13:28.544 09:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:28.804 [2024-11-20 09:45:52.002815] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:34.080 Initializing NVMe Controllers 00:13:34.080 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:34.080 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:34.080 Initialization complete. Launching workers. 00:13:34.080 ======================================================== 00:13:34.080 Latency(us) 00:13:34.080 Device Information : IOPS MiB/s Average min max 00:13:34.080 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39950.58 156.06 3203.79 971.57 9303.49 00:13:34.080 ======================================================== 00:13:34.080 Total : 39950.58 156.06 3203.79 971.57 9303.49 00:13:34.080 00:13:34.080 [2024-11-20 09:45:57.021046] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.080 09:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:34.080 [2024-11-20 09:45:57.261192] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:39.354 Initializing NVMe Controllers 00:13:39.354 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:39.354 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:39.354 Initialization complete. Launching workers. 00:13:39.354 ======================================================== 00:13:39.354 Latency(us) 00:13:39.354 Device Information : IOPS MiB/s Average min max 00:13:39.354 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.08 62.70 7979.87 6981.00 15322.84 00:13:39.354 ======================================================== 00:13:39.354 Total : 16051.08 62.70 7979.87 6981.00 15322.84 00:13:39.354 00:13:39.354 [2024-11-20 09:46:02.303648] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:39.354 09:46:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:39.354 [2024-11-20 09:46:02.516621] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.624 [2024-11-20 09:46:07.593253] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.624 Initializing NVMe Controllers 00:13:44.624 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:44.624 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:44.624 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:44.624 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:44.624 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:44.624 Initialization complete. 
Launching workers. 00:13:44.624 Starting thread on core 2 00:13:44.624 Starting thread on core 3 00:13:44.624 Starting thread on core 1 00:13:44.624 09:46:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:44.624 [2024-11-20 09:46:07.889249] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:47.931 [2024-11-20 09:46:10.945949] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:47.931 Initializing NVMe Controllers 00:13:47.931 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.931 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:47.931 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:47.931 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:47.931 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:47.931 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:47.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:47.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:47.931 Initialization complete. Launching workers. 
00:13:47.931 Starting thread on core 1 with urgent priority queue 00:13:47.931 Starting thread on core 2 with urgent priority queue 00:13:47.931 Starting thread on core 3 with urgent priority queue 00:13:47.931 Starting thread on core 0 with urgent priority queue 00:13:47.931 SPDK bdev Controller (SPDK1 ) core 0: 9131.67 IO/s 10.95 secs/100000 ios 00:13:47.931 SPDK bdev Controller (SPDK1 ) core 1: 9040.33 IO/s 11.06 secs/100000 ios 00:13:47.931 SPDK bdev Controller (SPDK1 ) core 2: 8464.00 IO/s 11.81 secs/100000 ios 00:13:47.931 SPDK bdev Controller (SPDK1 ) core 3: 7626.67 IO/s 13.11 secs/100000 ios 00:13:47.931 ======================================================== 00:13:47.931 00:13:47.931 09:46:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:47.931 [2024-11-20 09:46:11.231070] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.190 Initializing NVMe Controllers 00:13:48.190 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.190 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.190 Namespace ID: 1 size: 0GB 00:13:48.190 Initialization complete. 00:13:48.190 INFO: using host memory buffer for IO 00:13:48.190 Hello world! 
00:13:48.190 [2024-11-20 09:46:11.267302] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.190 09:46:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:48.449 [2024-11-20 09:46:11.554347] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:49.385 Initializing NVMe Controllers 00:13:49.385 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:49.385 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:49.385 Initialization complete. Launching workers. 00:13:49.385 submit (in ns) avg, min, max = 7437.1, 3304.3, 4005438.3 00:13:49.385 complete (in ns) avg, min, max = 21930.8, 1814.8, 4000576.5 00:13:49.385 00:13:49.385 Submit histogram 00:13:49.385 ================ 00:13:49.385 Range in us Cumulative Count 00:13:49.385 3.297 - 3.311: 0.0061% ( 1) 00:13:49.385 3.311 - 3.325: 0.0981% ( 15) 00:13:49.385 3.325 - 3.339: 0.2942% ( 32) 00:13:49.385 3.339 - 3.353: 0.7233% ( 70) 00:13:49.385 3.353 - 3.367: 1.7959% ( 175) 00:13:49.385 3.367 - 3.381: 4.6767% ( 470) 00:13:49.385 3.381 - 3.395: 9.8927% ( 851) 00:13:49.385 3.395 - 3.409: 15.9424% ( 987) 00:13:49.385 3.409 - 3.423: 22.6540% ( 1095) 00:13:49.385 3.423 - 3.437: 28.9979% ( 1035) 00:13:49.385 3.437 - 3.450: 34.6062% ( 915) 00:13:49.385 3.450 - 3.464: 39.8590% ( 857) 00:13:49.385 3.464 - 3.478: 44.4070% ( 742) 00:13:49.385 3.478 - 3.492: 48.6117% ( 686) 00:13:49.385 3.492 - 3.506: 53.6255% ( 818) 00:13:49.385 3.506 - 3.520: 59.8100% ( 1009) 00:13:49.385 3.520 - 3.534: 65.8413% ( 984) 00:13:49.385 3.534 - 3.548: 70.3586% ( 737) 00:13:49.385 3.548 - 3.562: 74.8146% ( 727) 00:13:49.385 3.562 - 3.590: 83.5366% ( 1423) 00:13:49.385 3.590 - 3.617: 87.1039% ( 582) 
00:13:49.385 3.617 - 3.645: 87.9926% ( 145) 00:13:49.385 3.645 - 3.673: 88.9917% ( 163) 00:13:49.385 3.673 - 3.701: 90.6283% ( 267) 00:13:49.385 3.701 - 3.729: 92.1851% ( 254) 00:13:49.385 3.729 - 3.757: 93.6500% ( 239) 00:13:49.385 3.757 - 3.784: 95.5869% ( 316) 00:13:49.385 3.784 - 3.812: 97.1560% ( 256) 00:13:49.385 3.812 - 3.840: 98.1857% ( 168) 00:13:49.385 3.840 - 3.868: 98.8722% ( 112) 00:13:49.385 3.868 - 3.896: 99.2706% ( 65) 00:13:49.385 3.896 - 3.923: 99.4545% ( 30) 00:13:49.386 3.923 - 3.951: 99.5403% ( 14) 00:13:49.386 3.951 - 3.979: 99.5893% ( 8) 00:13:49.386 3.979 - 4.007: 99.6016% ( 2) 00:13:49.386 4.007 - 4.035: 99.6077% ( 1) 00:13:49.386 5.064 - 5.092: 99.6139% ( 1) 00:13:49.386 5.092 - 5.120: 99.6200% ( 1) 00:13:49.386 5.120 - 5.148: 99.6261% ( 1) 00:13:49.386 5.176 - 5.203: 99.6322% ( 1) 00:13:49.386 5.203 - 5.231: 99.6445% ( 2) 00:13:49.386 5.259 - 5.287: 99.6506% ( 1) 00:13:49.386 5.315 - 5.343: 99.6629% ( 2) 00:13:49.386 5.426 - 5.454: 99.6690% ( 1) 00:13:49.386 5.482 - 5.510: 99.6751% ( 1) 00:13:49.386 5.537 - 5.565: 99.6874% ( 2) 00:13:49.386 5.565 - 5.593: 99.6935% ( 1) 00:13:49.386 5.621 - 5.649: 99.6997% ( 1) 00:13:49.386 5.649 - 5.677: 99.7181% ( 3) 00:13:49.386 5.677 - 5.704: 99.7242% ( 1) 00:13:49.386 5.704 - 5.732: 99.7303% ( 1) 00:13:49.386 5.732 - 5.760: 99.7364% ( 1) 00:13:49.386 5.760 - 5.788: 99.7426% ( 1) 00:13:49.386 5.788 - 5.816: 99.7548% ( 2) 00:13:49.386 5.871 - 5.899: 99.7610% ( 1) 00:13:49.386 5.899 - 5.927: 99.7671% ( 1) 00:13:49.386 5.955 - 5.983: 99.7732% ( 1) 00:13:49.386 6.010 - 6.038: 99.7793% ( 1) 00:13:49.386 6.038 - 6.066: 99.7855% ( 1) 00:13:49.386 6.066 - 6.094: 99.7977% ( 2) 00:13:49.386 6.150 - 6.177: 99.8100% ( 2) 00:13:49.386 6.233 - 6.261: 99.8161% ( 1) 00:13:49.386 6.317 - 6.344: 99.8284% ( 2) 00:13:49.386 6.428 - 6.456: 99.8345% ( 1) 00:13:49.386 6.511 - 6.539: 99.8406% ( 1) 00:13:49.386 6.678 - 6.706: 99.8468% ( 1) 00:13:49.386 6.706 - 6.734: 99.8529% ( 1) 00:13:49.386 6.762 - 6.790: 99.8590% ( 1) 
[2024-11-20 09:46:12.576271] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:49.386 6.817 - 6.845: 99.8652% ( 1) 00:13:49.386 6.873 - 6.901: 99.8713% ( 1) 00:13:49.386 7.068 - 7.096: 99.8774% ( 1) 00:13:49.386 7.179 - 7.235: 99.8835% ( 1) 00:13:49.386 7.235 - 7.290: 99.8897% ( 1) 00:13:49.386 7.402 - 7.457: 99.8958% ( 1) 00:13:49.386 7.736 - 7.791: 99.9019% ( 1) 00:13:49.386 3989.148 - 4017.642: 100.0000% ( 16) 00:13:49.386 00:13:49.386 Complete histogram 00:13:49.386 ================== 00:13:49.386 Range in us Cumulative Count 00:13:49.386 1.809 - 1.823: 0.0613% ( 10) 00:13:49.386 1.823 - 1.837: 0.8581% ( 130) 00:13:49.386 1.837 - 1.850: 2.0349% ( 192) 00:13:49.386 1.850 - 1.864: 2.7214% ( 112) 00:13:49.386 1.864 - 1.878: 7.3368% ( 753) 00:13:49.386 1.878 - 1.892: 50.6589% ( 7068) 00:13:49.386 1.892 - 1.906: 84.3028% ( 5489) 00:13:49.386 1.906 - 1.920: 91.6273% ( 1195) 00:13:49.386 1.920 - 1.934: 93.5213% ( 309) 00:13:49.386 1.934 - 1.948: 94.2936% ( 126) 00:13:49.386 1.948 - 1.962: 96.3040% ( 328) 00:13:49.386 1.962 - 1.976: 98.4615% ( 352) 00:13:49.386 1.976 - 1.990: 99.2154% ( 123) 00:13:49.386 1.990 - 2.003: 99.3196% ( 17) 00:13:49.386 2.045 - 2.059: 99.3319% ( 2) 00:13:49.386 2.059 - 2.073: 99.3380% ( 1) 00:13:49.386 2.073 - 2.087: 99.3442% ( 1) 00:13:49.386 2.129 - 2.143: 99.3503% ( 1) 00:13:49.386 3.757 - 3.784: 99.3625% ( 2) 00:13:49.386 3.784 - 3.812: 99.3687% ( 1) 00:13:49.386 3.868 - 3.896: 99.3871% ( 3) 00:13:49.386 3.896 - 3.923: 99.3993% ( 2) 00:13:49.386 4.174 - 4.202: 99.4055% ( 1) 00:13:49.386 4.202 - 4.230: 99.4116% ( 1) 00:13:49.386 4.313 - 4.341: 99.4177% ( 1) 00:13:49.386 4.619 - 4.647: 99.4238% ( 1) 00:13:49.386 4.703 - 4.730: 99.4300% ( 1) 00:13:49.386 4.786 - 4.814: 99.4361% ( 1) 00:13:49.386 4.925 - 4.953: 99.4422% ( 1) 00:13:49.386 5.064 - 5.092: 99.4484% ( 1) 00:13:49.386 5.231 - 5.259: 99.4606% ( 2) 00:13:49.386 5.343 - 5.370: 99.4667% ( 1) 00:13:49.386 5.454 - 
5.482: 99.4729% ( 1) 00:13:49.386 6.456 - 6.483: 99.4790% ( 1) 00:13:49.386 8.849 - 8.904: 99.4851% ( 1) 00:13:49.386 39.179 - 39.402: 99.4913% ( 1) 00:13:49.386 147.812 - 148.703: 99.4974% ( 1) 00:13:49.386 3148.577 - 3162.824: 99.5035% ( 1) 00:13:49.386 3989.148 - 4017.642: 100.0000% ( 81) 00:13:49.386 00:13:49.386 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:49.386 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:49.386 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:49.386 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:49.386 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:49.645 [ 00:13:49.645 { 00:13:49.645 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:49.645 "subtype": "Discovery", 00:13:49.645 "listen_addresses": [], 00:13:49.645 "allow_any_host": true, 00:13:49.645 "hosts": [] 00:13:49.645 }, 00:13:49.645 { 00:13:49.645 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:49.645 "subtype": "NVMe", 00:13:49.645 "listen_addresses": [ 00:13:49.645 { 00:13:49.645 "trtype": "VFIOUSER", 00:13:49.645 "adrfam": "IPv4", 00:13:49.645 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:49.645 "trsvcid": "0" 00:13:49.645 } 00:13:49.645 ], 00:13:49.645 "allow_any_host": true, 00:13:49.645 "hosts": [], 00:13:49.645 "serial_number": "SPDK1", 00:13:49.645 "model_number": "SPDK bdev Controller", 00:13:49.645 "max_namespaces": 32, 00:13:49.645 "min_cntlid": 1, 00:13:49.645 "max_cntlid": 65519, 00:13:49.645 "namespaces": [ 00:13:49.645 { 00:13:49.645 "nsid": 1, 
00:13:49.645 "bdev_name": "Malloc1", 00:13:49.645 "name": "Malloc1", 00:13:49.645 "nguid": "37E3C29C232B471687C8F00A6803C0BD", 00:13:49.645 "uuid": "37e3c29c-232b-4716-87c8-f00a6803c0bd" 00:13:49.645 } 00:13:49.645 ] 00:13:49.645 }, 00:13:49.645 { 00:13:49.645 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:49.645 "subtype": "NVMe", 00:13:49.645 "listen_addresses": [ 00:13:49.645 { 00:13:49.645 "trtype": "VFIOUSER", 00:13:49.645 "adrfam": "IPv4", 00:13:49.645 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:49.645 "trsvcid": "0" 00:13:49.645 } 00:13:49.645 ], 00:13:49.645 "allow_any_host": true, 00:13:49.645 "hosts": [], 00:13:49.645 "serial_number": "SPDK2", 00:13:49.645 "model_number": "SPDK bdev Controller", 00:13:49.645 "max_namespaces": 32, 00:13:49.645 "min_cntlid": 1, 00:13:49.645 "max_cntlid": 65519, 00:13:49.645 "namespaces": [ 00:13:49.645 { 00:13:49.645 "nsid": 1, 00:13:49.645 "bdev_name": "Malloc2", 00:13:49.645 "name": "Malloc2", 00:13:49.645 "nguid": "3515EB6851EB457BB1CB7C7E9E5FE4B1", 00:13:49.645 "uuid": "3515eb68-51eb-457b-b1cb-7c7e9e5fe4b1" 00:13:49.645 } 00:13:49.645 ] 00:13:49.645 } 00:13:49.645 ] 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2878526 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:49.645 09:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:49.904 [2024-11-20 09:46:13.000342] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:49.904 Malloc3 00:13:49.904 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:50.163 [2024-11-20 09:46:13.242135] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.163 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:50.163 Asynchronous Event Request test 00:13:50.163 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.163 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.163 Registering asynchronous event callbacks... 00:13:50.163 Starting namespace attribute notice tests for all controllers... 00:13:50.163 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:50.163 aer_cb - Changed Namespace 00:13:50.163 Cleaning up... 
00:13:50.163 [ 00:13:50.163 { 00:13:50.163 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:50.163 "subtype": "Discovery", 00:13:50.163 "listen_addresses": [], 00:13:50.163 "allow_any_host": true, 00:13:50.163 "hosts": [] 00:13:50.163 }, 00:13:50.163 { 00:13:50.163 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:50.163 "subtype": "NVMe", 00:13:50.163 "listen_addresses": [ 00:13:50.163 { 00:13:50.163 "trtype": "VFIOUSER", 00:13:50.163 "adrfam": "IPv4", 00:13:50.163 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:50.163 "trsvcid": "0" 00:13:50.164 } 00:13:50.164 ], 00:13:50.164 "allow_any_host": true, 00:13:50.164 "hosts": [], 00:13:50.164 "serial_number": "SPDK1", 00:13:50.164 "model_number": "SPDK bdev Controller", 00:13:50.164 "max_namespaces": 32, 00:13:50.164 "min_cntlid": 1, 00:13:50.164 "max_cntlid": 65519, 00:13:50.164 "namespaces": [ 00:13:50.164 { 00:13:50.164 "nsid": 1, 00:13:50.164 "bdev_name": "Malloc1", 00:13:50.164 "name": "Malloc1", 00:13:50.164 "nguid": "37E3C29C232B471687C8F00A6803C0BD", 00:13:50.164 "uuid": "37e3c29c-232b-4716-87c8-f00a6803c0bd" 00:13:50.164 }, 00:13:50.164 { 00:13:50.164 "nsid": 2, 00:13:50.164 "bdev_name": "Malloc3", 00:13:50.164 "name": "Malloc3", 00:13:50.164 "nguid": "CC42262F49BD445788E322EBEB8581CF", 00:13:50.164 "uuid": "cc42262f-49bd-4457-88e3-22ebeb8581cf" 00:13:50.164 } 00:13:50.164 ] 00:13:50.164 }, 00:13:50.164 { 00:13:50.164 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:50.164 "subtype": "NVMe", 00:13:50.164 "listen_addresses": [ 00:13:50.164 { 00:13:50.164 "trtype": "VFIOUSER", 00:13:50.164 "adrfam": "IPv4", 00:13:50.164 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:50.164 "trsvcid": "0" 00:13:50.164 } 00:13:50.164 ], 00:13:50.164 "allow_any_host": true, 00:13:50.164 "hosts": [], 00:13:50.164 "serial_number": "SPDK2", 00:13:50.164 "model_number": "SPDK bdev Controller", 00:13:50.164 "max_namespaces": 32, 00:13:50.164 "min_cntlid": 1, 00:13:50.164 "max_cntlid": 65519, 00:13:50.164 "namespaces": [ 
00:13:50.164 { 00:13:50.164 "nsid": 1, 00:13:50.164 "bdev_name": "Malloc2", 00:13:50.164 "name": "Malloc2", 00:13:50.164 "nguid": "3515EB6851EB457BB1CB7C7E9E5FE4B1", 00:13:50.164 "uuid": "3515eb68-51eb-457b-b1cb-7c7e9e5fe4b1" 00:13:50.164 } 00:13:50.164 ] 00:13:50.164 } 00:13:50.164 ] 00:13:50.164 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2878526 00:13:50.164 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:50.164 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:50.164 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:50.164 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:50.164 [2024-11-20 09:46:13.476051] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:13:50.164 [2024-11-20 09:46:13.476089] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878631 ] 00:13:50.426 [2024-11-20 09:46:13.519704] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:50.426 [2024-11-20 09:46:13.524960] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:50.426 [2024-11-20 09:46:13.524983] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fce2efe8000 00:13:50.426 [2024-11-20 09:46:13.525963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.426 [2024-11-20 09:46:13.526980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.426 [2024-11-20 09:46:13.527983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.426 [2024-11-20 09:46:13.528990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:50.426 [2024-11-20 09:46:13.529993] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:50.426 [2024-11-20 09:46:13.530995] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.426 [2024-11-20 09:46:13.532001] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:50.426 
[2024-11-20 09:46:13.533009] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:50.426 [2024-11-20 09:46:13.534014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:50.426 [2024-11-20 09:46:13.534025] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fce2efdd000 00:13:50.426 [2024-11-20 09:46:13.534965] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:50.426 [2024-11-20 09:46:13.544483] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:50.426 [2024-11-20 09:46:13.544507] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:50.426 [2024-11-20 09:46:13.549591] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:50.426 [2024-11-20 09:46:13.549631] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:50.426 [2024-11-20 09:46:13.549698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:50.426 [2024-11-20 09:46:13.549711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:50.426 [2024-11-20 09:46:13.549717] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:50.426 [2024-11-20 09:46:13.550593] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:50.426 [2024-11-20 09:46:13.550603] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:50.426 [2024-11-20 09:46:13.550610] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:50.426 [2024-11-20 09:46:13.551604] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:50.426 [2024-11-20 09:46:13.551614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:50.426 [2024-11-20 09:46:13.551623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:50.426 [2024-11-20 09:46:13.552617] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:50.426 [2024-11-20 09:46:13.552627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:50.426 [2024-11-20 09:46:13.553622] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:50.426 [2024-11-20 09:46:13.553632] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:50.426 [2024-11-20 09:46:13.553637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:50.426 [2024-11-20 09:46:13.553642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:50.426 [2024-11-20 09:46:13.553750] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:50.426 [2024-11-20 09:46:13.553755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:50.426 [2024-11-20 09:46:13.553760] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:50.426 [2024-11-20 09:46:13.554632] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:50.426 [2024-11-20 09:46:13.555646] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:50.426 [2024-11-20 09:46:13.556652] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:50.426 [2024-11-20 09:46:13.557651] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:50.426 [2024-11-20 09:46:13.557689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:50.426 [2024-11-20 09:46:13.558659] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:50.426 [2024-11-20 09:46:13.558668] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:50.426 [2024-11-20 09:46:13.558673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:50.426 [2024-11-20 09:46:13.558690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:50.426 [2024-11-20 09:46:13.558697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:50.426 [2024-11-20 09:46:13.558709] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:50.426 [2024-11-20 09:46:13.558713] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.426 [2024-11-20 09:46:13.558717] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.426 [2024-11-20 09:46:13.558729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.426 [2024-11-20 09:46:13.565956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:50.426 [2024-11-20 09:46:13.565969] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:50.426 [2024-11-20 09:46:13.565974] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:50.426 [2024-11-20 09:46:13.565978] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:50.426 [2024-11-20 09:46:13.565983] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:50.426 [2024-11-20 09:46:13.565991] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:50.426 [2024-11-20 09:46:13.565996] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:50.426 [2024-11-20 09:46:13.566001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:50.426 [2024-11-20 09:46:13.566009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:50.426 [2024-11-20 09:46:13.566019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:50.426 [2024-11-20 09:46:13.573954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:50.426 [2024-11-20 09:46:13.573966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.426 [2024-11-20 09:46:13.573974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.426 [2024-11-20 09:46:13.573981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.426 [2024-11-20 09:46:13.573989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.426 [2024-11-20 09:46:13.573994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:50.426 [2024-11-20 09:46:13.574000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:50.426 [2024-11-20 09:46:13.574009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.581951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.581962] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:50.427 [2024-11-20 09:46:13.581967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.581973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.581979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.581987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.589954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.590008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.590020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:50.427 
[2024-11-20 09:46:13.590028] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:50.427 [2024-11-20 09:46:13.590032] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:50.427 [2024-11-20 09:46:13.590035] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.427 [2024-11-20 09:46:13.590041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.597953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.597963] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:50.427 [2024-11-20 09:46:13.597975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.597981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.597988] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:50.427 [2024-11-20 09:46:13.597992] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.427 [2024-11-20 09:46:13.597995] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.427 [2024-11-20 09:46:13.598001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.605953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.605966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.605974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.605981] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:50.427 [2024-11-20 09:46:13.605984] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.427 [2024-11-20 09:46:13.605987] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.427 [2024-11-20 09:46:13.605993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.613954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.613967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.613974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.613982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.613988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.613993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.614000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.614005] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:50.427 [2024-11-20 09:46:13.614009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:50.427 [2024-11-20 09:46:13.614014] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:50.427 [2024-11-20 09:46:13.614029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.621953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.621966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.629953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.629966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.637952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 
09:46:13.637965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.645953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.645968] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:50.427 [2024-11-20 09:46:13.645973] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:50.427 [2024-11-20 09:46:13.645977] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:50.427 [2024-11-20 09:46:13.645980] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:50.427 [2024-11-20 09:46:13.645983] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:50.427 [2024-11-20 09:46:13.645989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:50.427 [2024-11-20 09:46:13.645996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:50.427 [2024-11-20 09:46:13.646000] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:50.427 [2024-11-20 09:46:13.646003] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.427 [2024-11-20 09:46:13.646008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.646015] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:50.427 [2024-11-20 09:46:13.646018] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:50.427 [2024-11-20 09:46:13.646021] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.427 [2024-11-20 09:46:13.646027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.646034] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:50.427 [2024-11-20 09:46:13.646039] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:50.427 [2024-11-20 09:46:13.646043] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:50.427 [2024-11-20 09:46:13.646048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:50.427 [2024-11-20 09:46:13.653954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.653967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.653977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:50.427 [2024-11-20 09:46:13.653983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:50.427 ===================================================== 00:13:50.427 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.427 ===================================================== 00:13:50.427 Controller Capabilities/Features 00:13:50.427 
================================ 00:13:50.427 Vendor ID: 4e58 00:13:50.427 Subsystem Vendor ID: 4e58 00:13:50.427 Serial Number: SPDK2 00:13:50.427 Model Number: SPDK bdev Controller 00:13:50.427 Firmware Version: 25.01 00:13:50.427 Recommended Arb Burst: 6 00:13:50.427 IEEE OUI Identifier: 8d 6b 50 00:13:50.427 Multi-path I/O 00:13:50.427 May have multiple subsystem ports: Yes 00:13:50.427 May have multiple controllers: Yes 00:13:50.427 Associated with SR-IOV VF: No 00:13:50.427 Max Data Transfer Size: 131072 00:13:50.427 Max Number of Namespaces: 32 00:13:50.427 Max Number of I/O Queues: 127 00:13:50.427 NVMe Specification Version (VS): 1.3 00:13:50.427 NVMe Specification Version (Identify): 1.3 00:13:50.427 Maximum Queue Entries: 256 00:13:50.427 Contiguous Queues Required: Yes 00:13:50.427 Arbitration Mechanisms Supported 00:13:50.427 Weighted Round Robin: Not Supported 00:13:50.427 Vendor Specific: Not Supported 00:13:50.427 Reset Timeout: 15000 ms 00:13:50.427 Doorbell Stride: 4 bytes 00:13:50.427 NVM Subsystem Reset: Not Supported 00:13:50.427 Command Sets Supported 00:13:50.427 NVM Command Set: Supported 00:13:50.427 Boot Partition: Not Supported 00:13:50.427 Memory Page Size Minimum: 4096 bytes 00:13:50.428 Memory Page Size Maximum: 4096 bytes 00:13:50.428 Persistent Memory Region: Not Supported 00:13:50.428 Optional Asynchronous Events Supported 00:13:50.428 Namespace Attribute Notices: Supported 00:13:50.428 Firmware Activation Notices: Not Supported 00:13:50.428 ANA Change Notices: Not Supported 00:13:50.428 PLE Aggregate Log Change Notices: Not Supported 00:13:50.428 LBA Status Info Alert Notices: Not Supported 00:13:50.428 EGE Aggregate Log Change Notices: Not Supported 00:13:50.428 Normal NVM Subsystem Shutdown event: Not Supported 00:13:50.428 Zone Descriptor Change Notices: Not Supported 00:13:50.428 Discovery Log Change Notices: Not Supported 00:13:50.428 Controller Attributes 00:13:50.428 128-bit Host Identifier: Supported 00:13:50.428 
Non-Operational Permissive Mode: Not Supported 00:13:50.428 NVM Sets: Not Supported 00:13:50.428 Read Recovery Levels: Not Supported 00:13:50.428 Endurance Groups: Not Supported 00:13:50.428 Predictable Latency Mode: Not Supported 00:13:50.428 Traffic Based Keep ALive: Not Supported 00:13:50.428 Namespace Granularity: Not Supported 00:13:50.428 SQ Associations: Not Supported 00:13:50.428 UUID List: Not Supported 00:13:50.428 Multi-Domain Subsystem: Not Supported 00:13:50.428 Fixed Capacity Management: Not Supported 00:13:50.428 Variable Capacity Management: Not Supported 00:13:50.428 Delete Endurance Group: Not Supported 00:13:50.428 Delete NVM Set: Not Supported 00:13:50.428 Extended LBA Formats Supported: Not Supported 00:13:50.428 Flexible Data Placement Supported: Not Supported 00:13:50.428 00:13:50.428 Controller Memory Buffer Support 00:13:50.428 ================================ 00:13:50.428 Supported: No 00:13:50.428 00:13:50.428 Persistent Memory Region Support 00:13:50.428 ================================ 00:13:50.428 Supported: No 00:13:50.428 00:13:50.428 Admin Command Set Attributes 00:13:50.428 ============================ 00:13:50.428 Security Send/Receive: Not Supported 00:13:50.428 Format NVM: Not Supported 00:13:50.428 Firmware Activate/Download: Not Supported 00:13:50.428 Namespace Management: Not Supported 00:13:50.428 Device Self-Test: Not Supported 00:13:50.428 Directives: Not Supported 00:13:50.428 NVMe-MI: Not Supported 00:13:50.428 Virtualization Management: Not Supported 00:13:50.428 Doorbell Buffer Config: Not Supported 00:13:50.428 Get LBA Status Capability: Not Supported 00:13:50.428 Command & Feature Lockdown Capability: Not Supported 00:13:50.428 Abort Command Limit: 4 00:13:50.428 Async Event Request Limit: 4 00:13:50.428 Number of Firmware Slots: N/A 00:13:50.428 Firmware Slot 1 Read-Only: N/A 00:13:50.428 Firmware Activation Without Reset: N/A 00:13:50.428 Multiple Update Detection Support: N/A 00:13:50.428 Firmware Update 
Granularity: No Information Provided 00:13:50.428 Per-Namespace SMART Log: No 00:13:50.428 Asymmetric Namespace Access Log Page: Not Supported 00:13:50.428 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:50.428 Command Effects Log Page: Supported 00:13:50.428 Get Log Page Extended Data: Supported 00:13:50.428 Telemetry Log Pages: Not Supported 00:13:50.428 Persistent Event Log Pages: Not Supported 00:13:50.428 Supported Log Pages Log Page: May Support 00:13:50.428 Commands Supported & Effects Log Page: Not Supported 00:13:50.428 Feature Identifiers & Effects Log Page:May Support 00:13:50.428 NVMe-MI Commands & Effects Log Page: May Support 00:13:50.428 Data Area 4 for Telemetry Log: Not Supported 00:13:50.428 Error Log Page Entries Supported: 128 00:13:50.428 Keep Alive: Supported 00:13:50.428 Keep Alive Granularity: 10000 ms 00:13:50.428 00:13:50.428 NVM Command Set Attributes 00:13:50.428 ========================== 00:13:50.428 Submission Queue Entry Size 00:13:50.428 Max: 64 00:13:50.428 Min: 64 00:13:50.428 Completion Queue Entry Size 00:13:50.428 Max: 16 00:13:50.428 Min: 16 00:13:50.428 Number of Namespaces: 32 00:13:50.428 Compare Command: Supported 00:13:50.428 Write Uncorrectable Command: Not Supported 00:13:50.428 Dataset Management Command: Supported 00:13:50.428 Write Zeroes Command: Supported 00:13:50.428 Set Features Save Field: Not Supported 00:13:50.428 Reservations: Not Supported 00:13:50.428 Timestamp: Not Supported 00:13:50.428 Copy: Supported 00:13:50.428 Volatile Write Cache: Present 00:13:50.428 Atomic Write Unit (Normal): 1 00:13:50.428 Atomic Write Unit (PFail): 1 00:13:50.428 Atomic Compare & Write Unit: 1 00:13:50.428 Fused Compare & Write: Supported 00:13:50.428 Scatter-Gather List 00:13:50.428 SGL Command Set: Supported (Dword aligned) 00:13:50.428 SGL Keyed: Not Supported 00:13:50.428 SGL Bit Bucket Descriptor: Not Supported 00:13:50.428 SGL Metadata Pointer: Not Supported 00:13:50.428 Oversized SGL: Not Supported 00:13:50.428 SGL 
Metadata Address: Not Supported 00:13:50.428 SGL Offset: Not Supported 00:13:50.428 Transport SGL Data Block: Not Supported 00:13:50.428 Replay Protected Memory Block: Not Supported 00:13:50.428 00:13:50.428 Firmware Slot Information 00:13:50.428 ========================= 00:13:50.428 Active slot: 1 00:13:50.428 Slot 1 Firmware Revision: 25.01 00:13:50.428 00:13:50.428 00:13:50.428 Commands Supported and Effects 00:13:50.428 ============================== 00:13:50.428 Admin Commands 00:13:50.428 -------------- 00:13:50.428 Get Log Page (02h): Supported 00:13:50.428 Identify (06h): Supported 00:13:50.428 Abort (08h): Supported 00:13:50.428 Set Features (09h): Supported 00:13:50.428 Get Features (0Ah): Supported 00:13:50.428 Asynchronous Event Request (0Ch): Supported 00:13:50.428 Keep Alive (18h): Supported 00:13:50.428 I/O Commands 00:13:50.428 ------------ 00:13:50.428 Flush (00h): Supported LBA-Change 00:13:50.428 Write (01h): Supported LBA-Change 00:13:50.428 Read (02h): Supported 00:13:50.428 Compare (05h): Supported 00:13:50.428 Write Zeroes (08h): Supported LBA-Change 00:13:50.428 Dataset Management (09h): Supported LBA-Change 00:13:50.428 Copy (19h): Supported LBA-Change 00:13:50.428 00:13:50.428 Error Log 00:13:50.428 ========= 00:13:50.428 00:13:50.428 Arbitration 00:13:50.428 =========== 00:13:50.428 Arbitration Burst: 1 00:13:50.428 00:13:50.428 Power Management 00:13:50.428 ================ 00:13:50.428 Number of Power States: 1 00:13:50.428 Current Power State: Power State #0 00:13:50.428 Power State #0: 00:13:50.428 Max Power: 0.00 W 00:13:50.428 Non-Operational State: Operational 00:13:50.428 Entry Latency: Not Reported 00:13:50.428 Exit Latency: Not Reported 00:13:50.428 Relative Read Throughput: 0 00:13:50.428 Relative Read Latency: 0 00:13:50.428 Relative Write Throughput: 0 00:13:50.428 Relative Write Latency: 0 00:13:50.428 Idle Power: Not Reported 00:13:50.428 Active Power: Not Reported 00:13:50.428 Non-Operational Permissive Mode: Not 
Supported 00:13:50.428 00:13:50.428 Health Information 00:13:50.428 ================== 00:13:50.428 Critical Warnings: 00:13:50.428 Available Spare Space: OK 00:13:50.428 Temperature: OK 00:13:50.428 Device Reliability: OK 00:13:50.428 Read Only: No 00:13:50.428 Volatile Memory Backup: OK 00:13:50.428 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:50.428 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:50.428 Available Spare: 0% 00:13:50.428 Available Sp[2024-11-20 09:46:13.654077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:50.428 [2024-11-20 09:46:13.661953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:50.428 [2024-11-20 09:46:13.661980] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:50.428 [2024-11-20 09:46:13.661989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.428 [2024-11-20 09:46:13.661995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.428 [2024-11-20 09:46:13.662000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.428 [2024-11-20 09:46:13.662006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.428 [2024-11-20 09:46:13.662058] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:50.428 [2024-11-20 09:46:13.662069] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:50.428 
[2024-11-20 09:46:13.663058] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:50.428 [2024-11-20 09:46:13.663103] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:50.428 [2024-11-20 09:46:13.663109] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:50.428 [2024-11-20 09:46:13.664062] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:50.429 [2024-11-20 09:46:13.664073] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:50.429 [2024-11-20 09:46:13.664118] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:50.429 [2024-11-20 09:46:13.665096] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:50.429 are Threshold: 0% 00:13:50.429 Life Percentage Used: 0% 00:13:50.429 Data Units Read: 0 00:13:50.429 Data Units Written: 0 00:13:50.429 Host Read Commands: 0 00:13:50.429 Host Write Commands: 0 00:13:50.429 Controller Busy Time: 0 minutes 00:13:50.429 Power Cycles: 0 00:13:50.429 Power On Hours: 0 hours 00:13:50.429 Unsafe Shutdowns: 0 00:13:50.429 Unrecoverable Media Errors: 0 00:13:50.429 Lifetime Error Log Entries: 0 00:13:50.429 Warning Temperature Time: 0 minutes 00:13:50.429 Critical Temperature Time: 0 minutes 00:13:50.429 00:13:50.429 Number of Queues 00:13:50.429 ================ 00:13:50.429 Number of I/O Submission Queues: 127 00:13:50.429 Number of I/O Completion Queues: 127 00:13:50.429 00:13:50.429 Active Namespaces 00:13:50.429 ================= 00:13:50.429 Namespace ID:1 00:13:50.429 Error Recovery Timeout: Unlimited 
00:13:50.429 Command Set Identifier: NVM (00h) 00:13:50.429 Deallocate: Supported 00:13:50.429 Deallocated/Unwritten Error: Not Supported 00:13:50.429 Deallocated Read Value: Unknown 00:13:50.429 Deallocate in Write Zeroes: Not Supported 00:13:50.429 Deallocated Guard Field: 0xFFFF 00:13:50.429 Flush: Supported 00:13:50.429 Reservation: Supported 00:13:50.429 Namespace Sharing Capabilities: Multiple Controllers 00:13:50.429 Size (in LBAs): 131072 (0GiB) 00:13:50.429 Capacity (in LBAs): 131072 (0GiB) 00:13:50.429 Utilization (in LBAs): 131072 (0GiB) 00:13:50.429 NGUID: 3515EB6851EB457BB1CB7C7E9E5FE4B1 00:13:50.429 UUID: 3515eb68-51eb-457b-b1cb-7c7e9e5fe4b1 00:13:50.429 Thin Provisioning: Not Supported 00:13:50.429 Per-NS Atomic Units: Yes 00:13:50.429 Atomic Boundary Size (Normal): 0 00:13:50.429 Atomic Boundary Size (PFail): 0 00:13:50.429 Atomic Boundary Offset: 0 00:13:50.429 Maximum Single Source Range Length: 65535 00:13:50.429 Maximum Copy Length: 65535 00:13:50.429 Maximum Source Range Count: 1 00:13:50.429 NGUID/EUI64 Never Reused: No 00:13:50.429 Namespace Write Protected: No 00:13:50.429 Number of LBA Formats: 1 00:13:50.429 Current LBA Format: LBA Format #00 00:13:50.429 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:50.429 00:13:50.429 09:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:50.688 [2024-11-20 09:46:13.901344] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.961 Initializing NVMe Controllers 00:13:55.961 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:55.961 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:55.961 Initialization complete. Launching workers. 00:13:55.961 ======================================================== 00:13:55.961 Latency(us) 00:13:55.961 Device Information : IOPS MiB/s Average min max 00:13:55.961 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39929.14 155.97 3205.50 963.91 7485.71 00:13:55.961 ======================================================== 00:13:55.961 Total : 39929.14 155.97 3205.50 963.91 7485.71 00:13:55.961 00:13:55.961 [2024-11-20 09:46:19.005201] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.961 09:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:55.961 [2024-11-20 09:46:19.254920] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:01.229 Initializing NVMe Controllers 00:14:01.229 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:01.229 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:01.229 Initialization complete. Launching workers. 
00:14:01.229 ======================================================== 00:14:01.229 Latency(us) 00:14:01.229 Device Information : IOPS MiB/s Average min max 00:14:01.229 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39932.58 155.99 3205.47 987.56 9579.65 00:14:01.229 ======================================================== 00:14:01.229 Total : 39932.58 155.99 3205.47 987.56 9579.65 00:14:01.229 00:14:01.229 [2024-11-20 09:46:24.272182] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:01.229 09:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:01.229 [2024-11-20 09:46:24.486503] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.502 [2024-11-20 09:46:29.632052] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:06.502 Initializing NVMe Controllers 00:14:06.502 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:06.502 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:06.502 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:06.502 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:06.502 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:06.502 Initialization complete. Launching workers. 
00:14:06.502 Starting thread on core 2 00:14:06.502 Starting thread on core 3 00:14:06.502 Starting thread on core 1 00:14:06.502 09:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:06.805 [2024-11-20 09:46:29.930341] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.196 [2024-11-20 09:46:33.004410] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.196 Initializing NVMe Controllers 00:14:10.196 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.196 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.196 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:10.196 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:10.196 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:10.196 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:10.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:10.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:10.196 Initialization complete. Launching workers. 
00:14:10.196 Starting thread on core 1 with urgent priority queue 00:14:10.196 Starting thread on core 2 with urgent priority queue 00:14:10.196 Starting thread on core 3 with urgent priority queue 00:14:10.196 Starting thread on core 0 with urgent priority queue 00:14:10.196 SPDK bdev Controller (SPDK2 ) core 0: 10367.67 IO/s 9.65 secs/100000 ios 00:14:10.196 SPDK bdev Controller (SPDK2 ) core 1: 7842.00 IO/s 12.75 secs/100000 ios 00:14:10.196 SPDK bdev Controller (SPDK2 ) core 2: 7487.67 IO/s 13.36 secs/100000 ios 00:14:10.196 SPDK bdev Controller (SPDK2 ) core 3: 7473.33 IO/s 13.38 secs/100000 ios 00:14:10.196 ======================================================== 00:14:10.196 00:14:10.196 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:10.196 [2024-11-20 09:46:33.292858] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.196 Initializing NVMe Controllers 00:14:10.196 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.196 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.196 Namespace ID: 1 size: 0GB 00:14:10.196 Initialization complete. 00:14:10.196 INFO: using host memory buffer for IO 00:14:10.196 Hello world! 
00:14:10.196 [2024-11-20 09:46:33.303934] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.196 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:10.454 [2024-11-20 09:46:33.584801] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:11.390 Initializing NVMe Controllers 00:14:11.390 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:11.390 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:11.390 Initialization complete. Launching workers. 00:14:11.390 submit (in ns) avg, min, max = 8228.0, 3265.2, 4000227.8 00:14:11.390 complete (in ns) avg, min, max = 19059.0, 1790.4, 4995185.2 00:14:11.390 00:14:11.390 Submit histogram 00:14:11.390 ================ 00:14:11.390 Range in us Cumulative Count 00:14:11.390 3.256 - 3.270: 0.0125% ( 2) 00:14:11.390 3.270 - 3.283: 0.0250% ( 2) 00:14:11.390 3.283 - 3.297: 0.1061% ( 13) 00:14:11.390 3.297 - 3.311: 0.2870% ( 29) 00:14:11.390 3.311 - 3.325: 0.6488% ( 58) 00:14:11.390 3.325 - 3.339: 2.1397% ( 239) 00:14:11.390 3.339 - 3.353: 6.3256% ( 671) 00:14:11.390 3.353 - 3.367: 12.2957% ( 957) 00:14:11.390 3.367 - 3.381: 18.8022% ( 1043) 00:14:11.390 3.381 - 3.395: 25.0593% ( 1003) 00:14:11.390 3.395 - 3.409: 30.9981% ( 952) 00:14:11.390 3.409 - 3.423: 35.7580% ( 763) 00:14:11.390 3.423 - 3.437: 40.7923% ( 807) 00:14:11.390 3.437 - 3.450: 46.0387% ( 841) 00:14:11.390 3.450 - 3.464: 50.4679% ( 710) 00:14:11.390 3.464 - 3.478: 54.2233% ( 602) 00:14:11.390 3.478 - 3.492: 59.2389% ( 804) 00:14:11.390 3.492 - 3.506: 66.1946% ( 1115) 00:14:11.390 3.506 - 3.520: 71.1104% ( 788) 00:14:11.390 3.520 - 3.534: 75.8266% ( 756) 00:14:11.390 3.534 - 3.548: 80.7548% ( 790) 
00:14:11.390 3.548 - 3.562: 83.9613% ( 514) 00:14:11.390 3.562 - 3.590: 87.0243% ( 491) 00:14:11.390 3.590 - 3.617: 87.7105% ( 110) 00:14:11.390 3.617 - 3.645: 88.4467% ( 118) 00:14:11.390 3.645 - 3.673: 90.0561% ( 258) 00:14:11.390 3.673 - 3.701: 91.8528% ( 288) 00:14:11.390 3.701 - 3.729: 93.5122% ( 266) 00:14:11.390 3.729 - 3.757: 95.2339% ( 276) 00:14:11.390 3.757 - 3.784: 96.7935% ( 250) 00:14:11.390 3.784 - 3.812: 97.9476% ( 185) 00:14:11.390 3.812 - 3.840: 98.7024% ( 121) 00:14:11.390 3.840 - 3.868: 99.2202% ( 83) 00:14:11.390 3.868 - 3.896: 99.4386% ( 35) 00:14:11.390 3.896 - 3.923: 99.5446% ( 17) 00:14:11.390 3.923 - 3.951: 99.5696% ( 4) 00:14:11.390 3.979 - 4.007: 99.5758% ( 1) 00:14:11.390 5.259 - 5.287: 99.5820% ( 1) 00:14:11.390 5.398 - 5.426: 99.5883% ( 1) 00:14:11.391 5.565 - 5.593: 99.6007% ( 2) 00:14:11.391 5.732 - 5.760: 99.6070% ( 1) 00:14:11.391 5.760 - 5.788: 99.6132% ( 1) 00:14:11.391 5.788 - 5.816: 99.6195% ( 1) 00:14:11.391 5.843 - 5.871: 99.6319% ( 2) 00:14:11.391 5.955 - 5.983: 99.6382% ( 1) 00:14:11.391 6.066 - 6.094: 99.6507% ( 2) 00:14:11.391 6.094 - 6.122: 99.6569% ( 1) 00:14:11.391 6.150 - 6.177: 99.6631% ( 1) 00:14:11.391 6.177 - 6.205: 99.6694% ( 1) 00:14:11.391 6.317 - 6.344: 99.6756% ( 1) 00:14:11.391 6.456 - 6.483: 99.6881% ( 2) 00:14:11.391 6.483 - 6.511: 99.6943% ( 1) 00:14:11.391 6.567 - 6.595: 99.7068% ( 2) 00:14:11.391 6.706 - 6.734: 99.7130% ( 1) 00:14:11.391 6.762 - 6.790: 99.7255% ( 2) 00:14:11.391 6.790 - 6.817: 99.7380% ( 2) 00:14:11.391 6.845 - 6.873: 99.7442% ( 1) 00:14:11.391 6.901 - 6.929: 99.7505% ( 1) 00:14:11.391 7.012 - 7.040: 99.7567% ( 1) 00:14:11.391 7.123 - 7.179: 99.7692% ( 2) 00:14:11.391 7.179 - 7.235: 99.7817% ( 2) 00:14:11.391 7.235 - 7.290: 99.7941% ( 2) 00:14:11.391 7.457 - 7.513: 99.8066% ( 2) 00:14:11.391 7.624 - 7.680: 99.8129% ( 1) 00:14:11.391 7.791 - 7.847: 99.8191% ( 1) 00:14:11.391 7.847 - 7.903: 99.8253% ( 1) 00:14:11.391 7.958 - 8.014: 99.8316% ( 1) 00:14:11.391 8.070 - 8.125: 99.8378% ( 1) 
00:14:11.391 8.237 - 8.292: 99.8503% ( 2) 00:14:11.391 8.292 - 8.348: 99.8565% ( 1) 00:14:11.391 8.348 - 8.403: 99.8628% ( 1) 00:14:11.391 8.459 - 8.515: 99.8690% ( 1) 00:14:11.391 12.466 - 12.522: 99.8752% ( 1) 00:14:11.391 19.367 - 19.478: 99.8815% ( 1) 00:14:11.391 3989.148 - 4017.642: 100.0000% ( 19) 00:14:11.391 00:14:11.391 Complete histogram 00:14:11.391 ================== 00:14:11.391 Range in us Cumulative Count 00:14:11.391 1.781 - 1.795: 0.0062% ( 1) 00:14:11.391 1.809 - 1.823: 0.1061% ( 16) 00:14:11.391 1.823 - [2024-11-20 09:46:34.677016] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:11.391 1.837: 0.8546% ( 120) 00:14:11.391 1.837 - 1.850: 2.3082% ( 233) 00:14:11.391 1.850 - 1.864: 3.1940% ( 142) 00:14:11.391 1.864 - 1.878: 22.4329% ( 3084) 00:14:11.391 1.878 - 1.892: 71.7654% ( 7908) 00:14:11.391 1.892 - 1.906: 86.7436% ( 2401) 00:14:11.391 1.906 - 1.920: 92.6326% ( 944) 00:14:11.391 1.920 - 1.934: 95.8203% ( 511) 00:14:11.391 1.934 - 1.948: 97.0680% ( 200) 00:14:11.391 1.948 - 1.962: 98.2346% ( 187) 00:14:11.391 1.962 - 1.976: 98.9457% ( 114) 00:14:11.391 1.976 - 1.990: 99.2576% ( 50) 00:14:11.391 1.990 - 2.003: 99.3325% ( 12) 00:14:11.391 2.003 - 2.017: 99.3450% ( 2) 00:14:11.391 2.017 - 2.031: 99.3699% ( 4) 00:14:11.391 2.031 - 2.045: 99.3949% ( 4) 00:14:11.391 2.045 - 2.059: 99.4074% ( 2) 00:14:11.391 2.087 - 2.101: 99.4136% ( 1) 00:14:11.391 2.157 - 2.170: 99.4198% ( 1) 00:14:11.391 3.562 - 3.590: 99.4261% ( 1) 00:14:11.391 3.784 - 3.812: 99.4323% ( 1) 00:14:11.391 3.840 - 3.868: 99.4386% ( 1) 00:14:11.391 4.007 - 4.035: 99.4448% ( 1) 00:14:11.391 4.508 - 4.536: 99.4510% ( 1) 00:14:11.391 4.675 - 4.703: 99.4573% ( 1) 00:14:11.391 4.786 - 4.814: 99.4635% ( 1) 00:14:11.391 4.842 - 4.870: 99.4760% ( 2) 00:14:11.391 5.092 - 5.120: 99.4822% ( 1) 00:14:11.391 5.176 - 5.203: 99.4947% ( 2) 00:14:11.391 5.343 - 5.370: 99.5009% ( 1) 00:14:11.391 5.370 - 5.398: 99.5072% ( 1) 00:14:11.391 5.537 - 
5.565: 99.5134% ( 1) 00:14:11.391 5.732 - 5.760: 99.5197% ( 1) 00:14:11.391 5.760 - 5.788: 99.5259% ( 1) 00:14:11.391 6.038 - 6.066: 99.5321% ( 1) 00:14:11.391 6.066 - 6.094: 99.5384% ( 1) 00:14:11.391 6.344 - 6.372: 99.5446% ( 1) 00:14:11.391 8.181 - 8.237: 99.5508% ( 1) 00:14:11.391 9.405 - 9.461: 99.5571% ( 1) 00:14:11.391 9.906 - 9.962: 99.5633% ( 1) 00:14:11.391 12.188 - 12.243: 99.5696% ( 1) 00:14:11.391 3020.355 - 3034.602: 99.5820% ( 2) 00:14:11.391 3034.602 - 3048.849: 99.5883% ( 1) 00:14:11.391 3989.148 - 4017.642: 99.9750% ( 62) 00:14:11.391 4017.642 - 4046.136: 99.9813% ( 1) 00:14:11.391 4388.063 - 4416.557: 99.9875% ( 1) 00:14:11.391 4986.435 - 5014.929: 100.0000% ( 2) 00:14:11.391 00:14:11.391 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:11.391 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:11.391 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:11.391 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:11.391 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:11.650 [ 00:14:11.650 { 00:14:11.650 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:11.650 "subtype": "Discovery", 00:14:11.650 "listen_addresses": [], 00:14:11.650 "allow_any_host": true, 00:14:11.650 "hosts": [] 00:14:11.650 }, 00:14:11.650 { 00:14:11.650 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:11.650 "subtype": "NVMe", 00:14:11.650 "listen_addresses": [ 00:14:11.650 { 00:14:11.650 "trtype": "VFIOUSER", 00:14:11.650 "adrfam": "IPv4", 00:14:11.650 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:14:11.650 "trsvcid": "0" 00:14:11.650 } 00:14:11.650 ], 00:14:11.650 "allow_any_host": true, 00:14:11.650 "hosts": [], 00:14:11.650 "serial_number": "SPDK1", 00:14:11.650 "model_number": "SPDK bdev Controller", 00:14:11.650 "max_namespaces": 32, 00:14:11.650 "min_cntlid": 1, 00:14:11.650 "max_cntlid": 65519, 00:14:11.650 "namespaces": [ 00:14:11.650 { 00:14:11.650 "nsid": 1, 00:14:11.650 "bdev_name": "Malloc1", 00:14:11.650 "name": "Malloc1", 00:14:11.650 "nguid": "37E3C29C232B471687C8F00A6803C0BD", 00:14:11.650 "uuid": "37e3c29c-232b-4716-87c8-f00a6803c0bd" 00:14:11.650 }, 00:14:11.650 { 00:14:11.650 "nsid": 2, 00:14:11.650 "bdev_name": "Malloc3", 00:14:11.650 "name": "Malloc3", 00:14:11.650 "nguid": "CC42262F49BD445788E322EBEB8581CF", 00:14:11.650 "uuid": "cc42262f-49bd-4457-88e3-22ebeb8581cf" 00:14:11.650 } 00:14:11.650 ] 00:14:11.650 }, 00:14:11.650 { 00:14:11.650 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:11.650 "subtype": "NVMe", 00:14:11.650 "listen_addresses": [ 00:14:11.650 { 00:14:11.650 "trtype": "VFIOUSER", 00:14:11.650 "adrfam": "IPv4", 00:14:11.650 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:11.650 "trsvcid": "0" 00:14:11.650 } 00:14:11.650 ], 00:14:11.650 "allow_any_host": true, 00:14:11.650 "hosts": [], 00:14:11.650 "serial_number": "SPDK2", 00:14:11.650 "model_number": "SPDK bdev Controller", 00:14:11.650 "max_namespaces": 32, 00:14:11.650 "min_cntlid": 1, 00:14:11.650 "max_cntlid": 65519, 00:14:11.650 "namespaces": [ 00:14:11.650 { 00:14:11.650 "nsid": 1, 00:14:11.650 "bdev_name": "Malloc2", 00:14:11.650 "name": "Malloc2", 00:14:11.650 "nguid": "3515EB6851EB457BB1CB7C7E9E5FE4B1", 00:14:11.650 "uuid": "3515eb68-51eb-457b-b1cb-7c7e9e5fe4b1" 00:14:11.650 } 00:14:11.650 ] 00:14:11.650 } 00:14:11.650 ] 00:14:11.650 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:11.650 09:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:11.650 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2882090 00:14:11.650 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:11.650 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:11.650 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:11.650 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:11.650 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:11.650 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:11.650 09:46:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:11.909 [2024-11-20 09:46:35.077372] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:11.909 Malloc4 00:14:11.909 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:12.168 [2024-11-20 09:46:35.310147] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:12.168 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:12.168 Asynchronous Event Request test 00:14:12.168 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:12.168 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:12.168 Registering asynchronous event callbacks... 00:14:12.168 Starting namespace attribute notice tests for all controllers... 00:14:12.168 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:12.168 aer_cb - Changed Namespace 00:14:12.168 Cleaning up... 00:14:12.427 [ 00:14:12.427 { 00:14:12.427 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:12.427 "subtype": "Discovery", 00:14:12.427 "listen_addresses": [], 00:14:12.427 "allow_any_host": true, 00:14:12.427 "hosts": [] 00:14:12.427 }, 00:14:12.427 { 00:14:12.427 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:12.427 "subtype": "NVMe", 00:14:12.427 "listen_addresses": [ 00:14:12.427 { 00:14:12.427 "trtype": "VFIOUSER", 00:14:12.427 "adrfam": "IPv4", 00:14:12.427 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:12.427 "trsvcid": "0" 00:14:12.427 } 00:14:12.427 ], 00:14:12.427 "allow_any_host": true, 00:14:12.427 "hosts": [], 00:14:12.427 "serial_number": "SPDK1", 00:14:12.427 "model_number": "SPDK bdev Controller", 00:14:12.427 "max_namespaces": 32, 00:14:12.427 "min_cntlid": 1, 00:14:12.427 "max_cntlid": 65519, 00:14:12.427 "namespaces": [ 00:14:12.427 { 00:14:12.427 "nsid": 1, 00:14:12.427 "bdev_name": "Malloc1", 00:14:12.427 "name": "Malloc1", 00:14:12.427 "nguid": "37E3C29C232B471687C8F00A6803C0BD", 00:14:12.427 "uuid": "37e3c29c-232b-4716-87c8-f00a6803c0bd" 00:14:12.427 }, 00:14:12.427 { 00:14:12.427 "nsid": 2, 00:14:12.427 "bdev_name": "Malloc3", 00:14:12.427 "name": "Malloc3", 00:14:12.427 "nguid": "CC42262F49BD445788E322EBEB8581CF", 00:14:12.427 "uuid": "cc42262f-49bd-4457-88e3-22ebeb8581cf" 00:14:12.427 } 00:14:12.427 ] 00:14:12.427 }, 00:14:12.427 { 00:14:12.427 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:14:12.427 "subtype": "NVMe", 00:14:12.427 "listen_addresses": [ 00:14:12.427 { 00:14:12.427 "trtype": "VFIOUSER", 00:14:12.427 "adrfam": "IPv4", 00:14:12.427 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:12.427 "trsvcid": "0" 00:14:12.427 } 00:14:12.427 ], 00:14:12.427 "allow_any_host": true, 00:14:12.427 "hosts": [], 00:14:12.427 "serial_number": "SPDK2", 00:14:12.427 "model_number": "SPDK bdev Controller", 00:14:12.427 "max_namespaces": 32, 00:14:12.427 "min_cntlid": 1, 00:14:12.427 "max_cntlid": 65519, 00:14:12.427 "namespaces": [ 00:14:12.427 { 00:14:12.427 "nsid": 1, 00:14:12.427 "bdev_name": "Malloc2", 00:14:12.427 "name": "Malloc2", 00:14:12.427 "nguid": "3515EB6851EB457BB1CB7C7E9E5FE4B1", 00:14:12.427 "uuid": "3515eb68-51eb-457b-b1cb-7c7e9e5fe4b1" 00:14:12.427 }, 00:14:12.427 { 00:14:12.427 "nsid": 2, 00:14:12.427 "bdev_name": "Malloc4", 00:14:12.427 "name": "Malloc4", 00:14:12.427 "nguid": "3498E56E154044C58A2F30BFB813699B", 00:14:12.427 "uuid": "3498e56e-1540-44c5-8a2f-30bfb813699b" 00:14:12.427 } 00:14:12.427 ] 00:14:12.427 } 00:14:12.427 ] 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2882090 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2874471 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2874471 ']' 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2874471 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.427 09:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2874471 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874471' 00:14:12.427 killing process with pid 2874471 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2874471 00:14:12.427 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2874471 00:14:12.686 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2882328 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2882328' 00:14:12.687 Process pid: 2882328 00:14:12.687 09:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2882328 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2882328 ']' 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.687 09:46:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:12.687 [2024-11-20 09:46:35.877874] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:12.687 [2024-11-20 09:46:35.878740] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:14:12.687 [2024-11-20 09:46:35.878780] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.687 [2024-11-20 09:46:35.952693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.687 [2024-11-20 09:46:35.990023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:12.687 [2024-11-20 09:46:35.990074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.687 [2024-11-20 09:46:35.990082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.687 [2024-11-20 09:46:35.990088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.687 [2024-11-20 09:46:35.990093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.687 [2024-11-20 09:46:35.991763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.687 [2024-11-20 09:46:35.991874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.687 [2024-11-20 09:46:35.991992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.687 [2024-11-20 09:46:35.991993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.946 [2024-11-20 09:46:36.059754] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:12.946 [2024-11-20 09:46:36.060882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:12.946 [2024-11-20 09:46:36.061053] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:12.946 [2024-11-20 09:46:36.061243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:12.946 [2024-11-20 09:46:36.061307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:12.946 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.946 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:12.946 09:46:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:13.883 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:14.141 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:14.141 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:14.141 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:14.141 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:14.141 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:14.399 Malloc1 00:14:14.399 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:14.658 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:14.658 09:46:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:14.916 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:14.916 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:14.916 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:15.173 Malloc2 00:14:15.173 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:15.431 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:15.689 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:15.689 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:15.689 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2882328 00:14:15.689 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2882328 ']' 00:14:15.689 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2882328 00:14:15.689 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:15.689 09:46:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.690 09:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2882328 00:14:15.690 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.690 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.690 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2882328' 00:14:15.690 killing process with pid 2882328 00:14:15.690 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2882328 00:14:15.690 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2882328 00:14:15.948 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:15.948 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:15.948 00:14:15.948 real 0m50.909s 00:14:15.948 user 3m16.903s 00:14:15.948 sys 0m3.316s 00:14:15.948 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.948 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:15.948 ************************************ 00:14:15.948 END TEST nvmf_vfio_user 00:14:15.948 ************************************ 00:14:15.948 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:15.948 09:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:15.948 09:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.948 09:46:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.208 ************************************ 00:14:16.208 START TEST nvmf_vfio_user_nvme_compliance 00:14:16.208 ************************************ 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:16.208 * Looking for test storage... 00:14:16.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1703 -- # lcov --version 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.208 09:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.208 09:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:14:16.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.208 --rc genhtml_branch_coverage=1 00:14:16.208 --rc genhtml_function_coverage=1 00:14:16.208 --rc genhtml_legend=1 00:14:16.208 --rc geninfo_all_blocks=1 00:14:16.208 --rc geninfo_unexecuted_blocks=1 00:14:16.208 00:14:16.208 ' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:14:16.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.208 --rc genhtml_branch_coverage=1 00:14:16.208 --rc genhtml_function_coverage=1 00:14:16.208 --rc genhtml_legend=1 00:14:16.208 --rc geninfo_all_blocks=1 00:14:16.208 --rc geninfo_unexecuted_blocks=1 00:14:16.208 00:14:16.208 ' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:14:16.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.208 --rc genhtml_branch_coverage=1 00:14:16.208 --rc genhtml_function_coverage=1 00:14:16.208 --rc 
genhtml_legend=1 00:14:16.208 --rc geninfo_all_blocks=1 00:14:16.208 --rc geninfo_unexecuted_blocks=1 00:14:16.208 00:14:16.208 ' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:14:16.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.208 --rc genhtml_branch_coverage=1 00:14:16.208 --rc genhtml_function_coverage=1 00:14:16.208 --rc genhtml_legend=1 00:14:16.208 --rc geninfo_all_blocks=1 00:14:16.208 --rc geninfo_unexecuted_blocks=1 00:14:16.208 00:14:16.208 ' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.208 09:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:16.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:16.208 09:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2882976 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2882976' 00:14:16.208 Process pid: 2882976 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2882976 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2882976 ']' 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.208 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:16.468 [2024-11-20 09:46:39.556424] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:14:16.468 [2024-11-20 09:46:39.556476] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.468 [2024-11-20 09:46:39.634229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.468 [2024-11-20 09:46:39.674666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.468 [2024-11-20 09:46:39.674703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.468 [2024-11-20 09:46:39.674711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.468 [2024-11-20 09:46:39.674720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.468 [2024-11-20 09:46:39.674725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:16.468 [2024-11-20 09:46:39.676132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.468 [2024-11-20 09:46:39.676242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.468 [2024-11-20 09:46:39.676244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.468 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.468 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:16.468 09:46:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.845 09:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.845 malloc0 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.845 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:17.846 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:17.846 00:14:17.846 00:14:17.846 CUnit - A unit testing framework for C - Version 2.1-3 00:14:17.846 http://cunit.sourceforge.net/ 00:14:17.846 00:14:17.846 00:14:17.846 Suite: nvme_compliance 00:14:17.846 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 09:46:41.024528] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.846 [2024-11-20 09:46:41.025876] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:17.846 [2024-11-20 09:46:41.025893] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:17.846 [2024-11-20 09:46:41.025900] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:17.846 [2024-11-20 09:46:41.027547] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.846 passed 00:14:17.846 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 09:46:41.104112] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:17.846 [2024-11-20 09:46:41.107137] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:17.846 passed 00:14:18.105 Test: admin_identify_ns ...[2024-11-20 09:46:41.187122] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.105 [2024-11-20 09:46:41.246964] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:18.105 [2024-11-20 09:46:41.254959] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:18.105 [2024-11-20 09:46:41.276046] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:18.105 passed 00:14:18.105 Test: admin_get_features_mandatory_features ...[2024-11-20 09:46:41.353010] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.105 [2024-11-20 09:46:41.358043] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.105 passed 00:14:18.105 Test: admin_get_features_optional_features ...[2024-11-20 09:46:41.433539] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.364 [2024-11-20 09:46:41.436563] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.364 passed 00:14:18.364 Test: admin_set_features_number_of_queues ...[2024-11-20 09:46:41.515432] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.364 [2024-11-20 09:46:41.620036] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.364 passed 00:14:18.623 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 09:46:41.696063] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.623 [2024-11-20 09:46:41.699081] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.623 passed 00:14:18.623 Test: admin_get_log_page_with_lpo ...[2024-11-20 09:46:41.775388] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.623 [2024-11-20 09:46:41.846961] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:18.623 [2024-11-20 09:46:41.860024] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.623 passed 00:14:18.623 Test: fabric_property_get ...[2024-11-20 09:46:41.932963] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.623 [2024-11-20 09:46:41.934205] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:18.624 [2024-11-20 09:46:41.938994] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.882 passed 00:14:18.882 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 09:46:42.013480] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.882 [2024-11-20 09:46:42.014721] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:18.882 [2024-11-20 09:46:42.016500] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.882 passed 00:14:18.882 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 09:46:42.090285] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.882 [2024-11-20 09:46:42.174956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:18.882 [2024-11-20 09:46:42.190959] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:18.882 [2024-11-20 09:46:42.196034] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.140 passed 00:14:19.140 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 09:46:42.272087] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.140 [2024-11-20 09:46:42.273334] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:19.140 [2024-11-20 09:46:42.275117] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.140 passed 00:14:19.140 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 09:46:42.352369] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.140 [2024-11-20 09:46:42.431957] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:19.140 [2024-11-20 
09:46:42.455959] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:19.140 [2024-11-20 09:46:42.461052] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.399 passed 00:14:19.399 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 09:46:42.533122] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.399 [2024-11-20 09:46:42.534357] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:19.399 [2024-11-20 09:46:42.534382] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:19.399 [2024-11-20 09:46:42.538152] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.399 passed 00:14:19.399 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 09:46:42.613455] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.399 [2024-11-20 09:46:42.704953] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:19.399 [2024-11-20 09:46:42.712955] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:19.399 [2024-11-20 09:46:42.720952] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:19.399 [2024-11-20 09:46:42.728956] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:19.657 [2024-11-20 09:46:42.761056] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.657 passed 00:14:19.657 Test: admin_create_io_sq_verify_pc ...[2024-11-20 09:46:42.833014] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.657 [2024-11-20 09:46:42.850964] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:19.658 [2024-11-20 09:46:42.868208] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.658 passed 00:14:19.658 Test: admin_create_io_qp_max_qps ...[2024-11-20 09:46:42.946761] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.034 [2024-11-20 09:46:44.052958] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:21.292 [2024-11-20 09:46:44.437041] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.292 passed 00:14:21.292 Test: admin_create_io_sq_shared_cq ...[2024-11-20 09:46:44.516158] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.551 [2024-11-20 09:46:44.647955] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:21.551 [2024-11-20 09:46:44.685023] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.551 passed 00:14:21.551 00:14:21.551 Run Summary: Type Total Ran Passed Failed Inactive 00:14:21.551 suites 1 1 n/a 0 0 00:14:21.551 tests 18 18 18 0 0 00:14:21.551 asserts 360 360 360 0 n/a 00:14:21.551 00:14:21.551 Elapsed time = 1.508 seconds 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2882976 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2882976 ']' 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2882976 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2882976 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2882976' 00:14:21.551 killing process with pid 2882976 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2882976 00:14:21.551 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2882976 00:14:21.843 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:21.843 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:21.843 00:14:21.843 real 0m5.672s 00:14:21.843 user 0m15.819s 00:14:21.843 sys 0m0.516s 00:14:21.843 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.843 09:46:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.843 ************************************ 00:14:21.843 END TEST nvmf_vfio_user_nvme_compliance 00:14:21.843 ************************************ 00:14:21.843 09:46:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:21.843 09:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:21.843 09:46:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.843 09:46:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.843 ************************************ 00:14:21.843 START TEST nvmf_vfio_user_fuzz 00:14:21.843 ************************************ 00:14:21.843 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:21.843 * Looking for test storage... 00:14:21.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.843 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:14:21.843 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1703 -- # lcov --version 00:14:21.843 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.103 09:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:14:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.103 --rc genhtml_branch_coverage=1 00:14:22.103 --rc genhtml_function_coverage=1 00:14:22.103 --rc genhtml_legend=1 00:14:22.103 --rc geninfo_all_blocks=1 00:14:22.103 --rc geninfo_unexecuted_blocks=1 00:14:22.103 00:14:22.103 ' 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:14:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.103 --rc genhtml_branch_coverage=1 00:14:22.103 --rc genhtml_function_coverage=1 00:14:22.103 --rc genhtml_legend=1 00:14:22.103 --rc geninfo_all_blocks=1 00:14:22.103 --rc geninfo_unexecuted_blocks=1 00:14:22.103 00:14:22.103 ' 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:14:22.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.103 --rc genhtml_branch_coverage=1 00:14:22.103 --rc genhtml_function_coverage=1 00:14:22.103 --rc genhtml_legend=1 00:14:22.103 --rc geninfo_all_blocks=1 00:14:22.103 --rc geninfo_unexecuted_blocks=1 00:14:22.103 00:14:22.103 ' 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:14:22.103 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:22.103 --rc genhtml_branch_coverage=1 00:14:22.103 --rc genhtml_function_coverage=1 00:14:22.103 --rc genhtml_legend=1 00:14:22.103 --rc geninfo_all_blocks=1 00:14:22.103 --rc geninfo_unexecuted_blocks=1 00:14:22.103 00:14:22.103 ' 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.103 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.104 09:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2884028 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2884028' 00:14:22.104 Process pid: 2884028 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2884028 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2884028 ']' 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.104 09:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.104 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:22.362 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.362 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:22.362 09:46:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 malloc0 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:23.298 09:46:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:55.383 Fuzzing completed. Shutting down the fuzz application 00:14:55.383 00:14:55.383 Dumping successful admin opcodes: 00:14:55.383 8, 9, 10, 24, 00:14:55.383 Dumping successful io opcodes: 00:14:55.383 0, 00:14:55.383 NS: 0x20000081ef00 I/O qp, Total commands completed: 1015389, total successful commands: 3983, random_seed: 1136535424 00:14:55.383 NS: 0x20000081ef00 admin qp, Total commands completed: 251712, total successful commands: 2034, random_seed: 2538432704 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2884028 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2884028 ']' 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2884028 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2884028 00:14:55.383 09:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2884028' 00:14:55.383 killing process with pid 2884028 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2884028 00:14:55.383 09:47:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2884028 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:55.383 00:14:55.383 real 0m32.212s 00:14:55.383 user 0m29.919s 00:14:55.383 sys 0m32.125s 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:55.383 ************************************ 00:14:55.383 END TEST nvmf_vfio_user_fuzz 00:14:55.383 ************************************ 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.383 ************************************ 00:14:55.383 START TEST nvmf_auth_target 00:14:55.383 ************************************ 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:55.383 * Looking for test storage... 00:14:55.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # lcov --version 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:14:55.383 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.384 09:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.384 09:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:14:55.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.384 --rc genhtml_branch_coverage=1 00:14:55.384 --rc genhtml_function_coverage=1 00:14:55.384 --rc genhtml_legend=1 00:14:55.384 --rc geninfo_all_blocks=1 00:14:55.384 --rc geninfo_unexecuted_blocks=1 00:14:55.384 00:14:55.384 ' 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:14:55.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.384 --rc genhtml_branch_coverage=1 00:14:55.384 --rc genhtml_function_coverage=1 00:14:55.384 --rc genhtml_legend=1 00:14:55.384 --rc geninfo_all_blocks=1 00:14:55.384 --rc geninfo_unexecuted_blocks=1 00:14:55.384 00:14:55.384 ' 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:14:55.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.384 --rc genhtml_branch_coverage=1 00:14:55.384 --rc genhtml_function_coverage=1 00:14:55.384 --rc genhtml_legend=1 00:14:55.384 --rc geninfo_all_blocks=1 00:14:55.384 --rc geninfo_unexecuted_blocks=1 00:14:55.384 00:14:55.384 ' 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:14:55.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.384 --rc genhtml_branch_coverage=1 00:14:55.384 --rc genhtml_function_coverage=1 00:14:55.384 --rc genhtml_legend=1 00:14:55.384 
--rc geninfo_all_blocks=1 00:14:55.384 --rc geninfo_unexecuted_blocks=1 00:14:55.384 00:14:55.384 ' 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.384 
09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.384 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:55.385 09:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:55.385 09:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:55.385 09:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:00.673 09:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:00.673 09:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:00.673 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:00.673 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.673 
09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:00.673 Found net devices under 0000:86:00.0: cvl_0_0 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.673 
09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:00.673 Found net devices under 0000:86:00.1: cvl_0_1 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:00.673 09:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.673 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:00.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:15:00.674 00:15:00.674 --- 10.0.0.2 ping statistics --- 00:15:00.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.674 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:00.674 00:15:00.674 --- 10.0.0.1 ping statistics --- 00:15:00.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.674 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2892386 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2892386 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2892386 ']' 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2892412 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6cdf0d02ffd0128cae5643b38cc0d2c0ebdb815e4e5a4a31 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wvq 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6cdf0d02ffd0128cae5643b38cc0d2c0ebdb815e4e5a4a31 0 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6cdf0d02ffd0128cae5643b38cc0d2c0ebdb815e4e5a4a31 0 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6cdf0d02ffd0128cae5643b38cc0d2c0ebdb815e4e5a4a31 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wvq 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wvq 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wvq 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a87831af2ccaddb0e8c6e33a3eea578a048141911e62c57094053cfdcfae0baf 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ii6 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a87831af2ccaddb0e8c6e33a3eea578a048141911e62c57094053cfdcfae0baf 3 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a87831af2ccaddb0e8c6e33a3eea578a048141911e62c57094053cfdcfae0baf 3 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a87831af2ccaddb0e8c6e33a3eea578a048141911e62c57094053cfdcfae0baf 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ii6 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ii6 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Ii6 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9764aeb6bcdc26f291369c9b2be5f576 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0G4 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9764aeb6bcdc26f291369c9b2be5f576 1 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
9764aeb6bcdc26f291369c9b2be5f576 1 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9764aeb6bcdc26f291369c9b2be5f576 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.674 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0G4 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0G4 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.0G4 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=228ef079b8dac9ac519183dacaa152ef0a3698bd72653faa 00:15:00.675 09:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4R1 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 228ef079b8dac9ac519183dacaa152ef0a3698bd72653faa 2 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 228ef079b8dac9ac519183dacaa152ef0a3698bd72653faa 2 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=228ef079b8dac9ac519183dacaa152ef0a3698bd72653faa 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:00.675 09:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4R1 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4R1 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.4R1 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=814dd58247e416ba954d877aab5f3b850078a709e1fffaac 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.c2W 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 814dd58247e416ba954d877aab5f3b850078a709e1fffaac 2 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 814dd58247e416ba954d877aab5f3b850078a709e1fffaac 2 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=814dd58247e416ba954d877aab5f3b850078a709e1fffaac 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.c2W 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.c2W 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.c2W 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bbb200b3245b713cab242dcb69af6346 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IHP 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bbb200b3245b713cab242dcb69af6346 1 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bbb200b3245b713cab242dcb69af6346 1 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bbb200b3245b713cab242dcb69af6346 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IHP 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IHP 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.IHP 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:00.934 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a9d11d2f538908f947f612ac2af41f956fd1a6bf01f7197e8a2bf3ae8070d480 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.L5b 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a9d11d2f538908f947f612ac2af41f956fd1a6bf01f7197e8a2bf3ae8070d480 3 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 a9d11d2f538908f947f612ac2af41f956fd1a6bf01f7197e8a2bf3ae8070d480 3 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a9d11d2f538908f947f612ac2af41f956fd1a6bf01f7197e8a2bf3ae8070d480 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.L5b 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.L5b 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.L5b 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2892386 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2892386 ']' 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.935 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.194 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.194 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:01.194 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2892412 /var/tmp/host.sock 00:15:01.194 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2892412 ']' 00:15:01.194 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:01.194 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.194 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:01.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:01.194 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.194 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wvq 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wvq 00:15:01.453 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wvq 00:15:01.712 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Ii6 ]] 00:15:01.712 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ii6 00:15:01.712 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.712 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.712 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.712 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ii6 00:15:01.712 09:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ii6 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0G4 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0G4 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0G4 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.4R1 ]] 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4R1 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.972 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4R1 00:15:02.231 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4R1 00:15:02.231 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:02.231 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.c2W 00:15:02.231 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.231 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.231 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.231 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.c2W 00:15:02.231 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.c2W 00:15:02.490 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.IHP ]] 00:15:02.490 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IHP 00:15:02.490 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.490 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.490 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.490 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IHP 00:15:02.490 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IHP 00:15:02.749 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:02.749 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.L5b 00:15:02.749 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.749 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.749 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.749 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.L5b 00:15:02.749 09:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.L5b 00:15:03.009 09:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:03.009 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:03.009 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.009 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.009 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.009 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.009 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:03.009 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.009 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.010 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:03.010 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:03.010 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.010 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.010 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.010 09:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.010 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.010 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.010 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.010 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.269 00:15:03.269 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.269 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.269 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.529 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.529 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.529 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.529 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.529 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.529 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.529 { 00:15:03.529 "cntlid": 1, 00:15:03.529 "qid": 0, 00:15:03.529 "state": "enabled", 00:15:03.529 "thread": "nvmf_tgt_poll_group_000", 00:15:03.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:03.529 "listen_address": { 00:15:03.529 "trtype": "TCP", 00:15:03.529 "adrfam": "IPv4", 00:15:03.529 "traddr": "10.0.0.2", 00:15:03.529 "trsvcid": "4420" 00:15:03.529 }, 00:15:03.529 "peer_address": { 00:15:03.529 "trtype": "TCP", 00:15:03.529 "adrfam": "IPv4", 00:15:03.529 "traddr": "10.0.0.1", 00:15:03.529 "trsvcid": "35396" 00:15:03.529 }, 00:15:03.529 "auth": { 00:15:03.529 "state": "completed", 00:15:03.529 "digest": "sha256", 00:15:03.529 "dhgroup": "null" 00:15:03.529 } 00:15:03.529 } 00:15:03.529 ]' 00:15:03.529 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.529 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.529 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.788 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:03.788 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.788 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.788 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.788 09:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.788 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:03.788 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:04.358 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.618 09:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.878 00:15:04.878 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.878 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.878 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.138 { 00:15:05.138 "cntlid": 3, 00:15:05.138 "qid": 0, 00:15:05.138 "state": "enabled", 00:15:05.138 "thread": "nvmf_tgt_poll_group_000", 00:15:05.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:05.138 "listen_address": { 00:15:05.138 "trtype": "TCP", 00:15:05.138 "adrfam": "IPv4", 00:15:05.138 
"traddr": "10.0.0.2", 00:15:05.138 "trsvcid": "4420" 00:15:05.138 }, 00:15:05.138 "peer_address": { 00:15:05.138 "trtype": "TCP", 00:15:05.138 "adrfam": "IPv4", 00:15:05.138 "traddr": "10.0.0.1", 00:15:05.138 "trsvcid": "35428" 00:15:05.138 }, 00:15:05.138 "auth": { 00:15:05.138 "state": "completed", 00:15:05.138 "digest": "sha256", 00:15:05.138 "dhgroup": "null" 00:15:05.138 } 00:15:05.138 } 00:15:05.138 ]' 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:05.138 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.397 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.397 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.397 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.397 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:05.397 09:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:05.965 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.965 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.965 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.965 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.965 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.965 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.965 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.965 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.224 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.483 00:15:06.483 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.483 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.483 
09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.741 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.742 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.742 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.742 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.742 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.742 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.742 { 00:15:06.742 "cntlid": 5, 00:15:06.742 "qid": 0, 00:15:06.742 "state": "enabled", 00:15:06.742 "thread": "nvmf_tgt_poll_group_000", 00:15:06.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:06.742 "listen_address": { 00:15:06.742 "trtype": "TCP", 00:15:06.742 "adrfam": "IPv4", 00:15:06.742 "traddr": "10.0.0.2", 00:15:06.742 "trsvcid": "4420" 00:15:06.742 }, 00:15:06.742 "peer_address": { 00:15:06.742 "trtype": "TCP", 00:15:06.742 "adrfam": "IPv4", 00:15:06.742 "traddr": "10.0.0.1", 00:15:06.742 "trsvcid": "35448" 00:15:06.742 }, 00:15:06.742 "auth": { 00:15:06.742 "state": "completed", 00:15:06.742 "digest": "sha256", 00:15:06.742 "dhgroup": "null" 00:15:06.742 } 00:15:06.742 } 00:15:06.742 ]' 00:15:06.742 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.742 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.742 09:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:06.742 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:06.742 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.742 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.742 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.742 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.000 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:07.000 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:07.568 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.568 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.568 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.568 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.568 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.568 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.568 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:07.568 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:07.827 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.085 00:15:08.085 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.085 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.085 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.345 
09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.345 { 00:15:08.345 "cntlid": 7, 00:15:08.345 "qid": 0, 00:15:08.345 "state": "enabled", 00:15:08.345 "thread": "nvmf_tgt_poll_group_000", 00:15:08.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:08.345 "listen_address": { 00:15:08.345 "trtype": "TCP", 00:15:08.345 "adrfam": "IPv4", 00:15:08.345 "traddr": "10.0.0.2", 00:15:08.345 "trsvcid": "4420" 00:15:08.345 }, 00:15:08.345 "peer_address": { 00:15:08.345 "trtype": "TCP", 00:15:08.345 "adrfam": "IPv4", 00:15:08.345 "traddr": "10.0.0.1", 00:15:08.345 "trsvcid": "35464" 00:15:08.345 }, 00:15:08.345 "auth": { 00:15:08.345 "state": "completed", 00:15:08.345 "digest": "sha256", 00:15:08.345 "dhgroup": "null" 00:15:08.345 } 00:15:08.345 } 00:15:08.345 ]' 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.345 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.603 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:08.603 09:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:09.171 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.172 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.172 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.172 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.172 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.172 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.172 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.172 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:09.172 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:09.430 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:09.430 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.430 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.430 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:09.430 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:09.431 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.431 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.431 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.431 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.431 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.431 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.431 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.431 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.689 00:15:09.689 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.689 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.689 09:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.948 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.948 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.948 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.948 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.948 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.948 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.948 { 00:15:09.948 "cntlid": 9, 00:15:09.948 "qid": 0, 00:15:09.948 "state": "enabled", 00:15:09.948 "thread": "nvmf_tgt_poll_group_000", 00:15:09.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:09.948 "listen_address": { 00:15:09.948 "trtype": "TCP", 00:15:09.948 "adrfam": "IPv4", 00:15:09.948 "traddr": "10.0.0.2", 00:15:09.948 "trsvcid": "4420" 00:15:09.948 }, 00:15:09.948 "peer_address": { 00:15:09.948 "trtype": "TCP", 00:15:09.948 "adrfam": "IPv4", 00:15:09.949 "traddr": "10.0.0.1", 00:15:09.949 "trsvcid": "35494" 00:15:09.949 
}, 00:15:09.949 "auth": { 00:15:09.949 "state": "completed", 00:15:09.949 "digest": "sha256", 00:15:09.949 "dhgroup": "ffdhe2048" 00:15:09.949 } 00:15:09.949 } 00:15:09.949 ]' 00:15:09.949 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.949 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.949 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.949 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.949 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.949 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.949 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.949 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.208 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:10.208 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret 
DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:10.776 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.776 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.776 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.776 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.776 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.776 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.776 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.776 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.034 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.035 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.293 00:15:11.293 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.293 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.293 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.553 { 00:15:11.553 "cntlid": 11, 00:15:11.553 "qid": 0, 00:15:11.553 "state": "enabled", 00:15:11.553 "thread": "nvmf_tgt_poll_group_000", 00:15:11.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:11.553 "listen_address": { 00:15:11.553 "trtype": "TCP", 00:15:11.553 "adrfam": "IPv4", 00:15:11.553 "traddr": "10.0.0.2", 00:15:11.553 "trsvcid": "4420" 00:15:11.553 }, 00:15:11.553 "peer_address": { 00:15:11.553 "trtype": "TCP", 00:15:11.553 "adrfam": "IPv4", 00:15:11.553 "traddr": "10.0.0.1", 00:15:11.553 "trsvcid": "40632" 00:15:11.553 }, 00:15:11.553 "auth": { 00:15:11.553 "state": "completed", 00:15:11.553 "digest": "sha256", 00:15:11.553 "dhgroup": "ffdhe2048" 00:15:11.553 } 00:15:11.553 } 00:15:11.553 ]' 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.553 09:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.553 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.813 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:11.813 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:12.381 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.381 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.381 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:15:12.381 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:12.381 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:12.381 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:12.381 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:12.381 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:12.640 09:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:12.898
00:15:12.898 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:12.898 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:12.898 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:13.158 {
00:15:13.158 "cntlid": 13,
00:15:13.158 "qid": 0,
00:15:13.158 "state": "enabled",
00:15:13.158 "thread": "nvmf_tgt_poll_group_000",
00:15:13.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:13.158 "listen_address": {
00:15:13.158 "trtype": "TCP",
00:15:13.158 "adrfam": "IPv4",
00:15:13.158 "traddr": "10.0.0.2",
00:15:13.158 "trsvcid": "4420"
00:15:13.158 },
00:15:13.158 "peer_address": {
00:15:13.158 "trtype": "TCP",
00:15:13.158 "adrfam": "IPv4",
00:15:13.158 "traddr": "10.0.0.1",
00:15:13.158 "trsvcid": "40660"
00:15:13.158 },
00:15:13.158 "auth": {
00:15:13.158 "state": "completed",
00:15:13.158 "digest": "sha256",
00:15:13.158 "dhgroup": "ffdhe2048"
00:15:13.158 }
00:15:13.158 }
00:15:13.158 ]'
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:13.158 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:13.416 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa:
00:15:13.417 09:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa:
00:15:13.985 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:13.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:13.985 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:13.985 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:13.985 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:13.985 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:13.985 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:13.985 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:13.985 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:14.244 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:14.525
00:15:14.525 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:14.525 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:14.525 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:14.835 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:14.835 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:14.835 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:14.835 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.835 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:14.835 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:14.835 {
00:15:14.835 "cntlid": 15,
00:15:14.835 "qid": 0,
00:15:14.835 "state": "enabled",
00:15:14.835 "thread": "nvmf_tgt_poll_group_000",
00:15:14.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:14.835 "listen_address": {
00:15:14.835 "trtype": "TCP",
00:15:14.835 "adrfam": "IPv4",
00:15:14.835 "traddr": "10.0.0.2",
00:15:14.835 "trsvcid": "4420"
00:15:14.835 },
00:15:14.835 "peer_address": {
00:15:14.835 "trtype": "TCP",
00:15:14.835 "adrfam": "IPv4",
00:15:14.835 "traddr": "10.0.0.1",
00:15:14.835 "trsvcid": "40680"
00:15:14.835 },
00:15:14.835 "auth": {
00:15:14.835 "state": "completed",
00:15:14.835 "digest": "sha256",
00:15:14.835 "dhgroup": "ffdhe2048"
00:15:14.835 }
00:15:14.835 }
00:15:14.835 ]'
00:15:14.836 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:14.836 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:14.836 09:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:14.836 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:14.836 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:14.836 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:14.836 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:14.836 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:15.125 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=:
00:15:15.125 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=:
00:15:15.692 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:15.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:15.692 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:15.692 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.692 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.693 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.693 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:15.693 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:15.693 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:15.693 09:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:15.952 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:16.212
00:15:16.212 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:16.212 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:16.212 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:16.472 {
00:15:16.472 "cntlid": 17,
00:15:16.472 "qid": 0,
00:15:16.472 "state": "enabled",
00:15:16.472 "thread": "nvmf_tgt_poll_group_000",
00:15:16.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:16.472 "listen_address": {
00:15:16.472 "trtype": "TCP",
00:15:16.472 "adrfam": "IPv4",
00:15:16.472 "traddr": "10.0.0.2",
00:15:16.472 "trsvcid": "4420"
00:15:16.472 },
00:15:16.472 "peer_address": {
00:15:16.472 "trtype": "TCP",
00:15:16.472 "adrfam": "IPv4",
00:15:16.472 "traddr": "10.0.0.1",
00:15:16.472 "trsvcid": "40708"
00:15:16.472 },
00:15:16.472 "auth": {
00:15:16.472 "state": "completed",
00:15:16.472 "digest": "sha256",
00:15:16.472 "dhgroup": "ffdhe3072"
00:15:16.472 }
00:15:16.472 }
00:15:16.472 ]'
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:16.472 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:16.473 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:16.473 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:16.473 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:16.473 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:16.732 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=:
00:15:16.732 09:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=:
00:15:17.299 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:17.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:17.300 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:17.300 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.300 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.300 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.300 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:17.300 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:17.300 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.559 09:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:17.818
00:15:17.818 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:17.818 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:17.818 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:18.076 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:18.076 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:18.076 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.076 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:18.077 {
00:15:18.077 "cntlid": 19,
00:15:18.077 "qid": 0,
00:15:18.077 "state": "enabled",
00:15:18.077 "thread": "nvmf_tgt_poll_group_000",
00:15:18.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:18.077 "listen_address": {
00:15:18.077 "trtype": "TCP",
00:15:18.077 "adrfam": "IPv4",
00:15:18.077 "traddr": "10.0.0.2",
00:15:18.077 "trsvcid": "4420"
00:15:18.077 },
00:15:18.077 "peer_address": {
00:15:18.077 "trtype": "TCP",
00:15:18.077 "adrfam": "IPv4",
00:15:18.077 "traddr": "10.0.0.1",
00:15:18.077 "trsvcid": "40734"
00:15:18.077 },
00:15:18.077 "auth": {
00:15:18.077 "state": "completed",
00:15:18.077 "digest": "sha256",
00:15:18.077 "dhgroup": "ffdhe3072"
00:15:18.077 }
00:15:18.077 }
00:15:18.077 ]'
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:18.077 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:18.335 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==:
00:15:18.335 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==:
00:15:18.904 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:18.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:18.904 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:18.904 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:18.904 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.904 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:18.904 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:18.904 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:18.904 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:19.162 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:19.421
00:15:19.421 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:19.421 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:19.421 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:19.680 {
00:15:19.680 "cntlid": 21,
00:15:19.680 "qid": 0,
00:15:19.680 "state": "enabled",
00:15:19.680 "thread": "nvmf_tgt_poll_group_000",
00:15:19.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:19.680 "listen_address": {
00:15:19.680 "trtype": "TCP",
00:15:19.680 "adrfam": "IPv4",
00:15:19.680 "traddr": "10.0.0.2",
00:15:19.680 "trsvcid": "4420"
00:15:19.680 },
00:15:19.680 "peer_address": {
00:15:19.680 "trtype": "TCP",
00:15:19.680 "adrfam": "IPv4",
00:15:19.680 "traddr": "10.0.0.1",
00:15:19.680 "trsvcid": "40752"
00:15:19.680 },
00:15:19.680 "auth": {
00:15:19.680 "state": "completed",
00:15:19.680 "digest": "sha256",
00:15:19.680 "dhgroup": "ffdhe3072"
00:15:19.680 }
00:15:19.680 }
00:15:19.680 ]'
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:19.680 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:19.939 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa:
00:15:19.939 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa:
00:15:20.507 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:20.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:20.507 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:20.507 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.507 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.507 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.507 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:20.507 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:20.507 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:20.766 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:21.026
00:15:21.026 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:21.026 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:21.026 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.285 { 00:15:21.285 "cntlid": 23, 00:15:21.285 "qid": 0, 00:15:21.285 "state": "enabled", 00:15:21.285 "thread": "nvmf_tgt_poll_group_000", 00:15:21.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:21.285 "listen_address": { 00:15:21.285 "trtype": "TCP", 00:15:21.285 "adrfam": "IPv4", 00:15:21.285 "traddr": "10.0.0.2", 00:15:21.285 "trsvcid": "4420" 00:15:21.285 }, 00:15:21.285 "peer_address": { 00:15:21.285 "trtype": "TCP", 00:15:21.285 "adrfam": "IPv4", 00:15:21.285 "traddr": "10.0.0.1", 00:15:21.285 "trsvcid": "58838" 00:15:21.285 }, 00:15:21.285 "auth": { 00:15:21.285 "state": "completed", 00:15:21.285 "digest": "sha256", 00:15:21.285 "dhgroup": "ffdhe3072" 00:15:21.285 } 00:15:21.285 } 00:15:21.285 ]' 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.285 09:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.285 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.543 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:21.543 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:22.109 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.109 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:22.109 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.109 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
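The `--dhchap-secret` values passed to `nvme connect` throughout this log use the DH-HMAC-CHAP secret representation `DHHC-1:<hh>:<base64 key>:`. By nvme-cli convention the two-digit field records the hash used when the key was generated (00 = unhashed, 01/02/03 = SHA-256/384/512) and the base64 payload carries the key material plus a 4-byte CRC; that interpretation is my assumption, not stated in the log. A minimal sketch splitting a sample secret copied verbatim from the log above:

```shell
# Split a DH-HMAC-CHAP secret into its fields. The sample value is copied
# from the log; the meaning of the hash-id field is an assumption (nvme-cli
# convention), not something the log itself states.
secret='DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==:'
IFS=: read -r magic hashid b64key _ <<< "$secret"
echo "magic=$magic hashid=$hashid"
# Decoded payload = key material + 4-byte CRC appended by the key generator.
echo "decoded bytes: $(printf '%s' "$b64key" | base64 -d | wc -c)"
```

The trailing colon in the secret yields an empty fourth field, which `read` discards into `_`.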
00:15:22.109 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.109 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.109 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.109 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.109 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.368 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.626 00:15:22.626 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.626 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.626 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.888 09:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.888 { 00:15:22.888 "cntlid": 25, 00:15:22.888 "qid": 0, 00:15:22.888 "state": "enabled", 00:15:22.888 "thread": "nvmf_tgt_poll_group_000", 00:15:22.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.888 "listen_address": { 00:15:22.888 "trtype": "TCP", 00:15:22.888 "adrfam": "IPv4", 00:15:22.888 "traddr": "10.0.0.2", 00:15:22.888 "trsvcid": "4420" 00:15:22.888 }, 00:15:22.888 "peer_address": { 00:15:22.888 "trtype": "TCP", 00:15:22.888 "adrfam": "IPv4", 00:15:22.888 "traddr": "10.0.0.1", 00:15:22.888 "trsvcid": "58852" 00:15:22.888 }, 00:15:22.888 "auth": { 00:15:22.888 "state": "completed", 00:15:22.888 "digest": "sha256", 00:15:22.888 "dhgroup": "ffdhe4096" 00:15:22.888 } 00:15:22.888 } 00:15:22.888 ]' 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.888 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.158 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:23.158 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:23.724 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.724 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.724 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.724 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.724 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.724 09:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.724 09:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.983 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.242 00:15:24.242 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.242 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.242 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.501 { 00:15:24.501 "cntlid": 27, 00:15:24.501 "qid": 0, 00:15:24.501 "state": "enabled", 00:15:24.501 "thread": "nvmf_tgt_poll_group_000", 00:15:24.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:24.501 "listen_address": { 00:15:24.501 "trtype": "TCP", 00:15:24.501 "adrfam": "IPv4", 00:15:24.501 "traddr": "10.0.0.2", 00:15:24.501 
"trsvcid": "4420" 00:15:24.501 }, 00:15:24.501 "peer_address": { 00:15:24.501 "trtype": "TCP", 00:15:24.501 "adrfam": "IPv4", 00:15:24.501 "traddr": "10.0.0.1", 00:15:24.501 "trsvcid": "58870" 00:15:24.501 }, 00:15:24.501 "auth": { 00:15:24.501 "state": "completed", 00:15:24.501 "digest": "sha256", 00:15:24.501 "dhgroup": "ffdhe4096" 00:15:24.501 } 00:15:24.501 } 00:15:24.501 ]' 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.501 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.760 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:24.760 09:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:25.328 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.328 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.328 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.328 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.328 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.328 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.328 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.328 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.588 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.847 00:15:25.847 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.847 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.847 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.104 { 00:15:26.104 "cntlid": 29, 00:15:26.104 "qid": 0, 00:15:26.104 "state": "enabled", 00:15:26.104 "thread": "nvmf_tgt_poll_group_000", 00:15:26.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:26.104 "listen_address": { 00:15:26.104 "trtype": "TCP", 00:15:26.104 "adrfam": "IPv4", 00:15:26.104 "traddr": "10.0.0.2", 00:15:26.104 "trsvcid": "4420" 00:15:26.104 }, 00:15:26.104 "peer_address": { 00:15:26.104 "trtype": "TCP", 00:15:26.104 "adrfam": "IPv4", 00:15:26.104 "traddr": "10.0.0.1", 00:15:26.104 "trsvcid": "58886" 00:15:26.104 }, 00:15:26.104 "auth": { 00:15:26.104 "state": "completed", 00:15:26.104 "digest": "sha256", 00:15:26.104 "dhgroup": "ffdhe4096" 00:15:26.104 } 00:15:26.104 } 00:15:26.104 ]' 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.104 09:47:49 
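The checks running here (target/auth.sh@75-77) extract the negotiated auth parameters from the `nvmf_subsystem_get_qpairs` dump with `jq -r` and compare them against the expected digest, dhgroup, and state. A condensed sketch of that verification against a literal sample trimmed from the qpair dump above (`jq` assumed available, as it is in this environment):

```shell
# Mirror the target/auth.sh verification: pull auth fields out of a qpair
# listing and assert the negotiated parameters. The JSON is a trimmed sample
# of the nvmf_subsystem_get_qpairs output shown in the log above.
qpairs='[{"cntlid": 29, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe4096"}}]'
digest=$(jq -r '.[0].auth.digest'  <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
astate=$(jq -r '.[0].auth.state'   <<< "$qpairs")
[[ $digest == sha256 && $dhgroup == ffdhe4096 && $astate == completed ]] \
    && echo "auth OK: $digest/$dhgroup"
```

`auth.state == "completed"` is what distinguishes a qpair that actually finished DH-HMAC-CHAP negotiation from one that merely connected.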
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.104 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.362 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:26.362 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:26.929 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.930 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.930 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.930 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.930 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.930 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.930 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.930 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.189 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.448 00:15:27.448 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.448 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.448 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.708 { 00:15:27.708 "cntlid": 31, 00:15:27.708 "qid": 0, 00:15:27.708 "state": "enabled", 00:15:27.708 "thread": "nvmf_tgt_poll_group_000", 00:15:27.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.708 "listen_address": { 00:15:27.708 "trtype": "TCP", 00:15:27.708 "adrfam": "IPv4", 00:15:27.708 "traddr": "10.0.0.2", 00:15:27.708 "trsvcid": "4420" 00:15:27.708 }, 00:15:27.708 "peer_address": { 00:15:27.708 "trtype": "TCP", 00:15:27.708 "adrfam": "IPv4", 00:15:27.708 "traddr": "10.0.0.1", 00:15:27.708 "trsvcid": "58916" 00:15:27.708 }, 00:15:27.708 "auth": { 00:15:27.708 "state": "completed", 00:15:27.708 "digest": "sha256", 00:15:27.708 "dhgroup": "ffdhe4096" 00:15:27.708 } 00:15:27.708 } 00:15:27.708 ]' 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.708 09:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.708 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.708 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.708 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.968 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:27.968 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:28.536 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.536 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.536 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.536 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.536 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.536 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.536 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.536 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.536 09:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.795 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.054 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.313 { 00:15:29.313 "cntlid": 33, 00:15:29.313 "qid": 0, 00:15:29.313 "state": "enabled", 00:15:29.313 "thread": "nvmf_tgt_poll_group_000", 00:15:29.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:29.313 "listen_address": { 00:15:29.313 "trtype": "TCP", 00:15:29.313 "adrfam": "IPv4", 00:15:29.313 "traddr": "10.0.0.2", 00:15:29.313 
"trsvcid": "4420" 00:15:29.313 }, 00:15:29.313 "peer_address": { 00:15:29.313 "trtype": "TCP", 00:15:29.313 "adrfam": "IPv4", 00:15:29.313 "traddr": "10.0.0.1", 00:15:29.313 "trsvcid": "58944" 00:15:29.313 }, 00:15:29.313 "auth": { 00:15:29.313 "state": "completed", 00:15:29.313 "digest": "sha256", 00:15:29.313 "dhgroup": "ffdhe6144" 00:15:29.313 } 00:15:29.313 } 00:15:29.313 ]' 00:15:29.313 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.573 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.573 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.573 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.573 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.573 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.573 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.573 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.831 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:29.832 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.399 09:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.399 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.657 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.658 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.658 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.658 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.917 00:15:30.917 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.917 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.917 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.176 { 00:15:31.176 "cntlid": 35, 00:15:31.176 "qid": 0, 00:15:31.176 "state": "enabled", 00:15:31.176 "thread": "nvmf_tgt_poll_group_000", 00:15:31.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:31.176 "listen_address": { 00:15:31.176 "trtype": "TCP", 00:15:31.176 "adrfam": "IPv4", 00:15:31.176 "traddr": "10.0.0.2", 00:15:31.176 "trsvcid": "4420" 00:15:31.176 }, 00:15:31.176 "peer_address": { 00:15:31.176 "trtype": "TCP", 00:15:31.176 "adrfam": "IPv4", 00:15:31.176 "traddr": "10.0.0.1", 00:15:31.176 "trsvcid": "38182" 00:15:31.176 }, 00:15:31.176 "auth": { 00:15:31.176 "state": "completed", 00:15:31.176 "digest": "sha256", 00:15:31.176 "dhgroup": "ffdhe6144" 00:15:31.176 } 00:15:31.176 } 00:15:31.176 ]' 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.176 09:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.176 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.435 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:31.435 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:32.003 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.003 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.003 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.003 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.003 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.003 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.003 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.003 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.263 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.522 00:15:32.522 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.522 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.522 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.781 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.781 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.781 09:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.781 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.781 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.781 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.781 { 00:15:32.781 "cntlid": 37, 00:15:32.781 "qid": 0, 00:15:32.781 "state": "enabled", 00:15:32.781 "thread": "nvmf_tgt_poll_group_000", 00:15:32.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.781 "listen_address": { 00:15:32.781 "trtype": "TCP", 00:15:32.781 "adrfam": "IPv4", 00:15:32.781 "traddr": "10.0.0.2", 00:15:32.781 "trsvcid": "4420" 00:15:32.781 }, 00:15:32.781 "peer_address": { 00:15:32.781 "trtype": "TCP", 00:15:32.781 "adrfam": "IPv4", 00:15:32.781 "traddr": "10.0.0.1", 00:15:32.781 "trsvcid": "38200" 00:15:32.781 }, 00:15:32.781 "auth": { 00:15:32.781 "state": "completed", 00:15:32.781 "digest": "sha256", 00:15:32.781 "dhgroup": "ffdhe6144" 00:15:32.781 } 00:15:32.781 } 00:15:32.781 ]' 00:15:32.781 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.781 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.781 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.781 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.781 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.781 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.781 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.781 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.039 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:33.039 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:33.607 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.607 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.607 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.607 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.607 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.607 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.607 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.607 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.866 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.125 00:15:34.125 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.125 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.125 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.385 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.385 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.385 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.385 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.385 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.385 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.385 { 00:15:34.385 "cntlid": 39, 00:15:34.385 "qid": 0, 00:15:34.385 "state": "enabled", 00:15:34.385 "thread": "nvmf_tgt_poll_group_000", 00:15:34.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:34.385 "listen_address": { 00:15:34.385 "trtype": "TCP", 00:15:34.385 "adrfam": 
"IPv4", 00:15:34.385 "traddr": "10.0.0.2", 00:15:34.385 "trsvcid": "4420" 00:15:34.385 }, 00:15:34.385 "peer_address": { 00:15:34.385 "trtype": "TCP", 00:15:34.385 "adrfam": "IPv4", 00:15:34.385 "traddr": "10.0.0.1", 00:15:34.385 "trsvcid": "38214" 00:15:34.385 }, 00:15:34.385 "auth": { 00:15:34.385 "state": "completed", 00:15:34.385 "digest": "sha256", 00:15:34.385 "dhgroup": "ffdhe6144" 00:15:34.385 } 00:15:34.385 } 00:15:34.385 ]' 00:15:34.385 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.385 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:34.385 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.644 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.644 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.644 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.644 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.644 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.903 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:34.903 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:35.472 
09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.472 09:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.041 00:15:36.041 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.041 09:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.041 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.301 { 00:15:36.301 "cntlid": 41, 00:15:36.301 "qid": 0, 00:15:36.301 "state": "enabled", 00:15:36.301 "thread": "nvmf_tgt_poll_group_000", 00:15:36.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:36.301 "listen_address": { 00:15:36.301 "trtype": "TCP", 00:15:36.301 "adrfam": "IPv4", 00:15:36.301 "traddr": "10.0.0.2", 00:15:36.301 "trsvcid": "4420" 00:15:36.301 }, 00:15:36.301 "peer_address": { 00:15:36.301 "trtype": "TCP", 00:15:36.301 "adrfam": "IPv4", 00:15:36.301 "traddr": "10.0.0.1", 00:15:36.301 "trsvcid": "38256" 00:15:36.301 }, 00:15:36.301 "auth": { 00:15:36.301 "state": "completed", 00:15:36.301 "digest": "sha256", 00:15:36.301 "dhgroup": "ffdhe8192" 00:15:36.301 } 00:15:36.301 } 00:15:36.301 ]' 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.301 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.560 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:36.560 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:37.129 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.129 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.129 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.129 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.129 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.129 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.129 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:37.129 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.389 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.957 00:15:37.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.216 09:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.216 { 00:15:38.216 "cntlid": 43, 00:15:38.216 "qid": 0, 00:15:38.216 "state": "enabled", 00:15:38.216 "thread": "nvmf_tgt_poll_group_000", 00:15:38.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:38.216 "listen_address": { 00:15:38.216 "trtype": "TCP", 00:15:38.216 "adrfam": "IPv4", 00:15:38.216 "traddr": "10.0.0.2", 00:15:38.216 "trsvcid": "4420" 00:15:38.216 }, 00:15:38.216 "peer_address": { 00:15:38.216 "trtype": "TCP", 00:15:38.216 "adrfam": "IPv4", 00:15:38.216 "traddr": "10.0.0.1", 00:15:38.216 "trsvcid": "38290" 00:15:38.216 }, 00:15:38.216 "auth": { 00:15:38.216 "state": "completed", 00:15:38.216 "digest": "sha256", 00:15:38.216 "dhgroup": "ffdhe8192" 00:15:38.216 } 00:15:38.216 } 00:15:38.216 ]' 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.216 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.474 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:38.474 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:39.041 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.041 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.041 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.041 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.041 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.041 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.042 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.042 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.301 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.869 00:15:39.869 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.869 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.869 09:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.869 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.869 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.869 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.869 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.869 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.869 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.869 { 00:15:39.869 "cntlid": 45, 00:15:39.869 "qid": 0, 00:15:39.869 "state": "enabled", 00:15:39.869 "thread": "nvmf_tgt_poll_group_000", 00:15:39.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.869 
"listen_address": { 00:15:39.869 "trtype": "TCP", 00:15:39.869 "adrfam": "IPv4", 00:15:39.869 "traddr": "10.0.0.2", 00:15:39.869 "trsvcid": "4420" 00:15:39.869 }, 00:15:39.869 "peer_address": { 00:15:39.869 "trtype": "TCP", 00:15:39.869 "adrfam": "IPv4", 00:15:39.869 "traddr": "10.0.0.1", 00:15:39.869 "trsvcid": "38336" 00:15:39.869 }, 00:15:39.869 "auth": { 00:15:39.869 "state": "completed", 00:15:39.869 "digest": "sha256", 00:15:39.869 "dhgroup": "ffdhe8192" 00:15:39.869 } 00:15:39.869 } 00:15:39.869 ]' 00:15:39.869 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.869 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.869 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.128 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.128 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.128 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.128 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.128 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:40.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.957 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.525 00:15:41.525 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.525 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:41.525 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.784 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.784 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.784 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.784 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.784 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.784 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.784 { 00:15:41.784 "cntlid": 47, 00:15:41.784 "qid": 0, 00:15:41.784 "state": "enabled", 00:15:41.784 "thread": "nvmf_tgt_poll_group_000", 00:15:41.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.784 "listen_address": { 00:15:41.784 "trtype": "TCP", 00:15:41.784 "adrfam": "IPv4", 00:15:41.784 "traddr": "10.0.0.2", 00:15:41.784 "trsvcid": "4420" 00:15:41.784 }, 00:15:41.784 "peer_address": { 00:15:41.784 "trtype": "TCP", 00:15:41.784 "adrfam": "IPv4", 00:15:41.784 "traddr": "10.0.0.1", 00:15:41.784 "trsvcid": "36198" 00:15:41.784 }, 00:15:41.784 "auth": { 00:15:41.784 "state": "completed", 00:15:41.784 "digest": "sha256", 00:15:41.784 "dhgroup": "ffdhe8192" 00:15:41.784 } 00:15:41.784 } 00:15:41.784 ]' 00:15:41.784 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.784 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.784 09:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.784 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.784 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.044 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.044 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.044 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.044 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:42.044 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.612 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.871 
09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.871 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.872 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.131 00:15:43.131 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.131 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.131 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.390 { 00:15:43.390 "cntlid": 49, 00:15:43.390 "qid": 0, 00:15:43.390 "state": "enabled", 00:15:43.390 "thread": "nvmf_tgt_poll_group_000", 00:15:43.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.390 "listen_address": { 00:15:43.390 "trtype": "TCP", 00:15:43.390 "adrfam": "IPv4", 00:15:43.390 "traddr": "10.0.0.2", 00:15:43.390 "trsvcid": "4420" 00:15:43.390 }, 00:15:43.390 "peer_address": { 00:15:43.390 "trtype": "TCP", 00:15:43.390 "adrfam": "IPv4", 00:15:43.390 "traddr": "10.0.0.1", 00:15:43.390 "trsvcid": "36230" 00:15:43.390 }, 00:15:43.390 "auth": { 00:15:43.390 "state": "completed", 00:15:43.390 "digest": "sha384", 00:15:43.390 "dhgroup": "null" 00:15:43.390 } 00:15:43.390 } 00:15:43.390 ]' 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:43.390 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.650 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.650 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:43.650 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.650 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:43.650 09:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:44.218 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.218 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.218 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.218 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.218 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.218 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.218 09:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:44.218 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.477 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.736 00:15:44.736 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.736 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.736 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.995 { 00:15:44.995 "cntlid": 51, 00:15:44.995 "qid": 0, 00:15:44.995 "state": "enabled", 00:15:44.995 "thread": "nvmf_tgt_poll_group_000", 00:15:44.995 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.995 "listen_address": { 00:15:44.995 "trtype": "TCP", 00:15:44.995 "adrfam": "IPv4", 00:15:44.995 "traddr": "10.0.0.2", 00:15:44.995 "trsvcid": "4420" 00:15:44.995 }, 00:15:44.995 "peer_address": { 00:15:44.995 "trtype": "TCP", 00:15:44.995 "adrfam": "IPv4", 00:15:44.995 "traddr": "10.0.0.1", 00:15:44.995 "trsvcid": "36270" 00:15:44.995 }, 00:15:44.995 "auth": { 00:15:44.995 "state": "completed", 00:15:44.995 "digest": "sha384", 00:15:44.995 "dhgroup": "null" 00:15:44.995 } 00:15:44.995 } 00:15:44.995 ]' 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.995 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.253 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.253 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.254 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.254 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:45.254 09:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:45.821 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.080 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.340 00:15:46.340 09:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.340 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.340 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.600 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.600 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.600 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.600 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.600 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.600 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.600 { 00:15:46.600 "cntlid": 53, 00:15:46.600 "qid": 0, 00:15:46.600 "state": "enabled", 00:15:46.600 "thread": "nvmf_tgt_poll_group_000", 00:15:46.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.600 "listen_address": { 00:15:46.600 "trtype": "TCP", 00:15:46.600 "adrfam": "IPv4", 00:15:46.600 "traddr": "10.0.0.2", 00:15:46.600 "trsvcid": "4420" 00:15:46.600 }, 00:15:46.600 "peer_address": { 00:15:46.600 "trtype": "TCP", 00:15:46.600 "adrfam": "IPv4", 00:15:46.600 "traddr": "10.0.0.1", 00:15:46.600 "trsvcid": "36288" 00:15:46.600 }, 00:15:46.600 "auth": { 00:15:46.600 "state": "completed", 00:15:46.600 "digest": "sha384", 00:15:46.600 "dhgroup": "null" 00:15:46.600 } 00:15:46.600 } 00:15:46.600 ]' 00:15:46.600 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:46.600 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.600 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.859 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:46.859 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.859 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.859 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.859 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.859 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:46.859 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:47.795 
09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.795 09:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.053 00:15:48.053 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.053 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.053 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.312 09:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.312 { 00:15:48.312 "cntlid": 55, 00:15:48.312 "qid": 0, 00:15:48.312 "state": "enabled", 00:15:48.312 "thread": "nvmf_tgt_poll_group_000", 00:15:48.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.312 "listen_address": { 00:15:48.312 "trtype": "TCP", 00:15:48.312 "adrfam": "IPv4", 00:15:48.312 "traddr": "10.0.0.2", 00:15:48.312 "trsvcid": "4420" 00:15:48.312 }, 00:15:48.312 "peer_address": { 00:15:48.312 "trtype": "TCP", 00:15:48.312 "adrfam": "IPv4", 00:15:48.312 "traddr": "10.0.0.1", 00:15:48.312 "trsvcid": "36316" 00:15:48.312 }, 00:15:48.312 "auth": { 00:15:48.312 "state": "completed", 00:15:48.312 "digest": "sha384", 00:15:48.312 "dhgroup": "null" 00:15:48.312 } 00:15:48.312 } 00:15:48.312 ]' 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.312 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.571 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:48.571 09:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:49.140 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.140 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.140 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.140 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.140 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.140 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.140 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.140 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:49.140 09:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.399 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.658 00:15:49.658 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.658 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.658 09:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.917 { 00:15:49.917 "cntlid": 57, 00:15:49.917 "qid": 0, 00:15:49.917 "state": "enabled", 00:15:49.917 "thread": "nvmf_tgt_poll_group_000", 00:15:49.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.917 "listen_address": { 00:15:49.917 "trtype": "TCP", 00:15:49.917 "adrfam": "IPv4", 00:15:49.917 "traddr": "10.0.0.2", 00:15:49.917 
"trsvcid": "4420" 00:15:49.917 }, 00:15:49.917 "peer_address": { 00:15:49.917 "trtype": "TCP", 00:15:49.917 "adrfam": "IPv4", 00:15:49.917 "traddr": "10.0.0.1", 00:15:49.917 "trsvcid": "36336" 00:15:49.917 }, 00:15:49.917 "auth": { 00:15:49.917 "state": "completed", 00:15:49.917 "digest": "sha384", 00:15:49.917 "dhgroup": "ffdhe2048" 00:15:49.917 } 00:15:49.917 } 00:15:49.917 ]' 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.917 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.175 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:50.175 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:50.743 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.743 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.743 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.743 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.743 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.743 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.743 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.743 09:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.002 09:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.002 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.261 00:15:51.261 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.261 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.261 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.520 { 00:15:51.520 "cntlid": 59, 00:15:51.520 "qid": 0, 00:15:51.520 "state": "enabled", 00:15:51.520 "thread": "nvmf_tgt_poll_group_000", 00:15:51.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.520 "listen_address": { 00:15:51.520 "trtype": "TCP", 00:15:51.520 "adrfam": "IPv4", 00:15:51.520 "traddr": "10.0.0.2", 00:15:51.520 "trsvcid": "4420" 00:15:51.520 }, 00:15:51.520 "peer_address": { 00:15:51.520 "trtype": "TCP", 00:15:51.520 "adrfam": "IPv4", 00:15:51.520 "traddr": "10.0.0.1", 00:15:51.520 "trsvcid": "48248" 00:15:51.520 }, 00:15:51.520 "auth": { 00:15:51.520 "state": "completed", 00:15:51.520 "digest": "sha384", 00:15:51.520 "dhgroup": "ffdhe2048" 00:15:51.520 } 00:15:51.520 } 00:15:51.520 ]' 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.520 09:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:51.520 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.521 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.521 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.521 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.779 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:51.779 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:52.347 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.347 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.347 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.347 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.347 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.347 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.347 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.347 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.607 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.866 00:15:52.866 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.866 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.866 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.866 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.866 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.866 09:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.866 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.866 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.866 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.866 { 00:15:52.866 "cntlid": 61, 00:15:52.866 "qid": 0, 00:15:52.866 "state": "enabled", 00:15:52.866 "thread": "nvmf_tgt_poll_group_000", 00:15:52.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:52.866 "listen_address": { 00:15:52.866 "trtype": "TCP", 00:15:52.866 "adrfam": "IPv4", 00:15:52.866 "traddr": "10.0.0.2", 00:15:52.866 "trsvcid": "4420" 00:15:52.866 }, 00:15:52.866 "peer_address": { 00:15:52.866 "trtype": "TCP", 00:15:52.866 "adrfam": "IPv4", 00:15:52.866 "traddr": "10.0.0.1", 00:15:52.866 "trsvcid": "48276" 00:15:52.866 }, 00:15:52.866 "auth": { 00:15:52.866 "state": "completed", 00:15:52.866 "digest": "sha384", 00:15:52.866 "dhgroup": "ffdhe2048" 00:15:52.866 } 00:15:52.866 } 00:15:52.866 ]' 00:15:52.866 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.126 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.126 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.126 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.126 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.126 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.126 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.126 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.488 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:53.488 09:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:53.806 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.806 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.806 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.806 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.806 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.806 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.806 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.806 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.065 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.323 00:15:54.323 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.323 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.323 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.581 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.581 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.581 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.581 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.581 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.581 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.581 { 00:15:54.581 "cntlid": 63, 00:15:54.581 "qid": 0, 00:15:54.581 "state": "enabled", 00:15:54.581 "thread": "nvmf_tgt_poll_group_000", 00:15:54.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.581 "listen_address": { 00:15:54.581 "trtype": "TCP", 00:15:54.581 "adrfam": 
"IPv4", 00:15:54.581 "traddr": "10.0.0.2", 00:15:54.581 "trsvcid": "4420" 00:15:54.581 }, 00:15:54.581 "peer_address": { 00:15:54.581 "trtype": "TCP", 00:15:54.581 "adrfam": "IPv4", 00:15:54.581 "traddr": "10.0.0.1", 00:15:54.581 "trsvcid": "48308" 00:15:54.581 }, 00:15:54.581 "auth": { 00:15:54.581 "state": "completed", 00:15:54.581 "digest": "sha384", 00:15:54.581 "dhgroup": "ffdhe2048" 00:15:54.581 } 00:15:54.581 } 00:15:54.581 ]' 00:15:54.581 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.581 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.582 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.582 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.582 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.582 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.582 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.582 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.840 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:54.840 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:15:55.408 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.408 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.408 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.408 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.408 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.408 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.408 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.408 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.408 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.667 
09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.667 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.926 00:15:55.926 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.926 09:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.926 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.184 { 00:15:56.184 "cntlid": 65, 00:15:56.184 "qid": 0, 00:15:56.184 "state": "enabled", 00:15:56.184 "thread": "nvmf_tgt_poll_group_000", 00:15:56.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.184 "listen_address": { 00:15:56.184 "trtype": "TCP", 00:15:56.184 "adrfam": "IPv4", 00:15:56.184 "traddr": "10.0.0.2", 00:15:56.184 "trsvcid": "4420" 00:15:56.184 }, 00:15:56.184 "peer_address": { 00:15:56.184 "trtype": "TCP", 00:15:56.184 "adrfam": "IPv4", 00:15:56.184 "traddr": "10.0.0.1", 00:15:56.184 "trsvcid": "48338" 00:15:56.184 }, 00:15:56.184 "auth": { 00:15:56.184 "state": "completed", 00:15:56.184 "digest": "sha384", 00:15:56.184 "dhgroup": "ffdhe3072" 00:15:56.184 } 00:15:56.184 } 00:15:56.184 ]' 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.184 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.443 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:56.443 09:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:15:57.011 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.011 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.011 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.011 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.011 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.011 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.011 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:57.011 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:57.270 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:57.270 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.270 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.271 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.530 00:15:57.530 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.530 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.530 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.789 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.789 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.789 
09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.789 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.789 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.789 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.789 { 00:15:57.789 "cntlid": 67, 00:15:57.789 "qid": 0, 00:15:57.789 "state": "enabled", 00:15:57.789 "thread": "nvmf_tgt_poll_group_000", 00:15:57.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.789 "listen_address": { 00:15:57.789 "trtype": "TCP", 00:15:57.789 "adrfam": "IPv4", 00:15:57.789 "traddr": "10.0.0.2", 00:15:57.789 "trsvcid": "4420" 00:15:57.789 }, 00:15:57.789 "peer_address": { 00:15:57.789 "trtype": "TCP", 00:15:57.789 "adrfam": "IPv4", 00:15:57.789 "traddr": "10.0.0.1", 00:15:57.789 "trsvcid": "48358" 00:15:57.789 }, 00:15:57.789 "auth": { 00:15:57.789 "state": "completed", 00:15:57.789 "digest": "sha384", 00:15:57.789 "dhgroup": "ffdhe3072" 00:15:57.789 } 00:15:57.789 } 00:15:57.789 ]' 00:15:57.789 09:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.789 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.789 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.789 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.789 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.789 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.789 09:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.789 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:58.048 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:15:58.615 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.615 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.615 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.615 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.615 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.615 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.615 09:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.615 09:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.874 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.134 00:15:59.134 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.134 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.134 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.395 { 00:15:59.395 "cntlid": 69, 00:15:59.395 "qid": 0, 00:15:59.395 "state": "enabled", 00:15:59.395 "thread": "nvmf_tgt_poll_group_000", 00:15:59.395 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:59.395 "listen_address": { 00:15:59.395 "trtype": "TCP", 00:15:59.395 "adrfam": "IPv4", 00:15:59.395 "traddr": "10.0.0.2", 00:15:59.395 "trsvcid": "4420" 00:15:59.395 }, 00:15:59.395 "peer_address": { 00:15:59.395 "trtype": "TCP", 00:15:59.395 "adrfam": "IPv4", 00:15:59.395 "traddr": "10.0.0.1", 00:15:59.395 "trsvcid": "48376" 00:15:59.395 }, 00:15:59.395 "auth": { 00:15:59.395 "state": "completed", 00:15:59.395 "digest": "sha384", 00:15:59.395 "dhgroup": "ffdhe3072" 00:15:59.395 } 00:15:59.395 } 00:15:59.395 ]' 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.395 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.654 09:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:15:59.654 09:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:00.221 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.221 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.221 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.221 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.221 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.221 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.221 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.221 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.480 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.739 00:16:00.739 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:00.739 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.739 09:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.998 { 00:16:00.998 "cntlid": 71, 00:16:00.998 "qid": 0, 00:16:00.998 "state": "enabled", 00:16:00.998 "thread": "nvmf_tgt_poll_group_000", 00:16:00.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.998 "listen_address": { 00:16:00.998 "trtype": "TCP", 00:16:00.998 "adrfam": "IPv4", 00:16:00.998 "traddr": "10.0.0.2", 00:16:00.998 "trsvcid": "4420" 00:16:00.998 }, 00:16:00.998 "peer_address": { 00:16:00.998 "trtype": "TCP", 00:16:00.998 "adrfam": "IPv4", 00:16:00.998 "traddr": "10.0.0.1", 00:16:00.998 "trsvcid": "60892" 00:16:00.998 }, 00:16:00.998 "auth": { 00:16:00.998 "state": "completed", 00:16:00.998 "digest": "sha384", 00:16:00.998 "dhgroup": "ffdhe3072" 00:16:00.998 } 00:16:00.998 } 00:16:00.998 ]' 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.998 09:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.998 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.257 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:01.257 09:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:01.824 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.824 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.824 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.824 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.824 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.824 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.824 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.824 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.824 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.083 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.342 00:16:02.342 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.342 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.342 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.601 { 00:16:02.601 "cntlid": 73, 00:16:02.601 "qid": 0, 00:16:02.601 "state": "enabled", 00:16:02.601 "thread": "nvmf_tgt_poll_group_000", 00:16:02.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.601 "listen_address": { 00:16:02.601 "trtype": "TCP", 00:16:02.601 "adrfam": "IPv4", 00:16:02.601 "traddr": "10.0.0.2", 00:16:02.601 "trsvcid": "4420" 00:16:02.601 }, 00:16:02.601 "peer_address": { 00:16:02.601 "trtype": "TCP", 00:16:02.601 "adrfam": "IPv4", 00:16:02.601 "traddr": "10.0.0.1", 00:16:02.601 "trsvcid": "60922" 00:16:02.601 }, 00:16:02.601 "auth": { 00:16:02.601 "state": "completed", 00:16:02.601 "digest": "sha384", 00:16:02.601 "dhgroup": "ffdhe4096" 00:16:02.601 } 00:16:02.601 } 00:16:02.601 ]' 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.601 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.860 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:02.860 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.860 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.860 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:02.860 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:03.427 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.427 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.427 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.427 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.427 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.427 09:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.427 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.427 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.685 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.686 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.945 00:16:03.945 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.945 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.945 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.203 { 00:16:04.203 "cntlid": 75, 00:16:04.203 "qid": 0, 00:16:04.203 "state": 
"enabled", 00:16:04.203 "thread": "nvmf_tgt_poll_group_000", 00:16:04.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.203 "listen_address": { 00:16:04.203 "trtype": "TCP", 00:16:04.203 "adrfam": "IPv4", 00:16:04.203 "traddr": "10.0.0.2", 00:16:04.203 "trsvcid": "4420" 00:16:04.203 }, 00:16:04.203 "peer_address": { 00:16:04.203 "trtype": "TCP", 00:16:04.203 "adrfam": "IPv4", 00:16:04.203 "traddr": "10.0.0.1", 00:16:04.203 "trsvcid": "60954" 00:16:04.203 }, 00:16:04.203 "auth": { 00:16:04.203 "state": "completed", 00:16:04.203 "digest": "sha384", 00:16:04.203 "dhgroup": "ffdhe4096" 00:16:04.203 } 00:16:04.203 } 00:16:04.203 ]' 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.203 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:04.462 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.462 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.462 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.462 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.462 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret 
DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:04.462 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:05.029 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 
ffdhe4096 2 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.288 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.289 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.547 00:16:05.547 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.547 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.547 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.806 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.806 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.806 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.806 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.806 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.806 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.806 { 00:16:05.806 "cntlid": 77, 00:16:05.806 "qid": 0, 00:16:05.806 "state": "enabled", 00:16:05.806 "thread": "nvmf_tgt_poll_group_000", 00:16:05.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.806 "listen_address": { 00:16:05.806 "trtype": "TCP", 00:16:05.806 "adrfam": "IPv4", 00:16:05.806 "traddr": "10.0.0.2", 00:16:05.806 "trsvcid": "4420" 00:16:05.806 }, 00:16:05.806 "peer_address": { 00:16:05.806 "trtype": "TCP", 00:16:05.806 "adrfam": "IPv4", 00:16:05.806 "traddr": "10.0.0.1", 00:16:05.806 "trsvcid": "60986" 00:16:05.806 }, 00:16:05.806 "auth": { 00:16:05.806 "state": "completed", 00:16:05.806 "digest": "sha384", 00:16:05.806 "dhgroup": "ffdhe4096" 00:16:05.806 } 
00:16:05.806 } 00:16:05.806 ]' 00:16:05.806 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.806 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.806 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.063 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:06.063 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.063 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.063 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.063 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.322 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:06.322 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:06.889 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:16:06.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.889 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.889 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.889 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.889 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.148 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.148 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.148 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.148 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.407 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.407 { 00:16:07.407 "cntlid": 79, 00:16:07.407 "qid": 0, 00:16:07.407 "state": "enabled", 00:16:07.407 "thread": "nvmf_tgt_poll_group_000", 00:16:07.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.407 "listen_address": { 00:16:07.407 "trtype": "TCP", 00:16:07.407 "adrfam": "IPv4", 00:16:07.407 "traddr": "10.0.0.2", 00:16:07.407 "trsvcid": "4420" 00:16:07.407 }, 00:16:07.407 "peer_address": { 00:16:07.407 "trtype": "TCP", 00:16:07.407 "adrfam": "IPv4", 00:16:07.407 "traddr": "10.0.0.1", 00:16:07.407 "trsvcid": "32790" 00:16:07.407 }, 00:16:07.407 "auth": { 00:16:07.407 "state": "completed", 00:16:07.407 "digest": "sha384", 00:16:07.407 "dhgroup": "ffdhe4096" 00:16:07.407 } 00:16:07.407 } 00:16:07.407 ]' 00:16:07.407 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.666 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.666 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.666 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.666 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.666 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.666 09:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.666 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.925 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:07.925 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:08.492 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.492 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.492 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.492 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.492 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.492 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.492 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.492 09:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.492 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.750 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.751 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.751 09:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.009 00:16:09.009 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.009 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.009 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.267 { 00:16:09.267 "cntlid": 81, 00:16:09.267 "qid": 0, 00:16:09.267 "state": "enabled", 00:16:09.267 "thread": "nvmf_tgt_poll_group_000", 00:16:09.267 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.267 "listen_address": { 00:16:09.267 "trtype": "TCP", 00:16:09.267 "adrfam": "IPv4", 00:16:09.267 "traddr": "10.0.0.2", 00:16:09.267 "trsvcid": "4420" 00:16:09.267 }, 00:16:09.267 "peer_address": { 00:16:09.267 "trtype": "TCP", 00:16:09.267 "adrfam": "IPv4", 00:16:09.267 "traddr": "10.0.0.1", 00:16:09.267 "trsvcid": "32816" 00:16:09.267 }, 00:16:09.267 "auth": { 00:16:09.267 "state": "completed", 00:16:09.267 "digest": "sha384", 00:16:09.267 "dhgroup": "ffdhe6144" 00:16:09.267 } 00:16:09.267 } 00:16:09.267 ]' 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.267 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.525 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret 
DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:09.525 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:10.093 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.093 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.093 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.093 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.093 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.093 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.093 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:10.093 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:10.352 09:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.352 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.610 00:16:10.610 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.610 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.610 09:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.869 { 00:16:10.869 "cntlid": 83, 00:16:10.869 "qid": 0, 00:16:10.869 "state": "enabled", 00:16:10.869 "thread": "nvmf_tgt_poll_group_000", 00:16:10.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.869 "listen_address": { 00:16:10.869 "trtype": "TCP", 00:16:10.869 "adrfam": "IPv4", 00:16:10.869 "traddr": "10.0.0.2", 00:16:10.869 "trsvcid": "4420" 00:16:10.869 }, 00:16:10.869 "peer_address": { 00:16:10.869 "trtype": "TCP", 00:16:10.869 "adrfam": "IPv4", 00:16:10.869 "traddr": "10.0.0.1", 00:16:10.869 "trsvcid": "32852" 00:16:10.869 }, 00:16:10.869 "auth": { 00:16:10.869 "state": 
"completed", 00:16:10.869 "digest": "sha384", 00:16:10.869 "dhgroup": "ffdhe6144" 00:16:10.869 } 00:16:10.869 } 00:16:10.869 ]' 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.869 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.128 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.128 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.128 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.128 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:11.128 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:11.695 09:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.695 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.695 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.695 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.695 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.695 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.695 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.695 09:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.954 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.213 00:16:12.472 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.472 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.472 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.472 
09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.472 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.472 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.472 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.472 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.472 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.472 { 00:16:12.472 "cntlid": 85, 00:16:12.472 "qid": 0, 00:16:12.472 "state": "enabled", 00:16:12.472 "thread": "nvmf_tgt_poll_group_000", 00:16:12.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.472 "listen_address": { 00:16:12.472 "trtype": "TCP", 00:16:12.472 "adrfam": "IPv4", 00:16:12.472 "traddr": "10.0.0.2", 00:16:12.472 "trsvcid": "4420" 00:16:12.472 }, 00:16:12.472 "peer_address": { 00:16:12.472 "trtype": "TCP", 00:16:12.472 "adrfam": "IPv4", 00:16:12.472 "traddr": "10.0.0.1", 00:16:12.472 "trsvcid": "32882" 00:16:12.472 }, 00:16:12.472 "auth": { 00:16:12.472 "state": "completed", 00:16:12.472 "digest": "sha384", 00:16:12.472 "dhgroup": "ffdhe6144" 00:16:12.472 } 00:16:12.472 } 00:16:12.472 ]' 00:16:12.472 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.730 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.731 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.731 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.731 09:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.731 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.731 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.731 09:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.989 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:12.989 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.557 
09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.557 09:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.557 09:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.125 00:16:14.125 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.125 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.125 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.382 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.382 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.382 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.382 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.382 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.382 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.382 { 00:16:14.382 "cntlid": 87, 00:16:14.382 
"qid": 0, 00:16:14.383 "state": "enabled", 00:16:14.383 "thread": "nvmf_tgt_poll_group_000", 00:16:14.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.383 "listen_address": { 00:16:14.383 "trtype": "TCP", 00:16:14.383 "adrfam": "IPv4", 00:16:14.383 "traddr": "10.0.0.2", 00:16:14.383 "trsvcid": "4420" 00:16:14.383 }, 00:16:14.383 "peer_address": { 00:16:14.383 "trtype": "TCP", 00:16:14.383 "adrfam": "IPv4", 00:16:14.383 "traddr": "10.0.0.1", 00:16:14.383 "trsvcid": "32906" 00:16:14.383 }, 00:16:14.383 "auth": { 00:16:14.383 "state": "completed", 00:16:14.383 "digest": "sha384", 00:16:14.383 "dhgroup": "ffdhe6144" 00:16:14.383 } 00:16:14.383 } 00:16:14.383 ]' 00:16:14.383 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.383 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.383 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.383 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.383 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.383 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.383 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.383 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.641 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:14.641 09:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:15.208 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.208 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.208 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.208 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.208 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.208 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.208 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.208 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.208 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.468 09:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.468 09:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.036 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.036 { 00:16:16.036 "cntlid": 89, 00:16:16.036 "qid": 0, 00:16:16.036 "state": "enabled", 00:16:16.036 "thread": "nvmf_tgt_poll_group_000", 00:16:16.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:16.036 "listen_address": { 00:16:16.036 "trtype": "TCP", 00:16:16.036 "adrfam": "IPv4", 00:16:16.036 "traddr": "10.0.0.2", 00:16:16.036 "trsvcid": "4420" 00:16:16.036 }, 00:16:16.036 "peer_address": { 00:16:16.036 "trtype": "TCP", 00:16:16.036 "adrfam": "IPv4", 00:16:16.036 "traddr": "10.0.0.1", 00:16:16.036 "trsvcid": "32920" 00:16:16.036 }, 00:16:16.036 "auth": { 00:16:16.036 "state": 
"completed", 00:16:16.036 "digest": "sha384", 00:16:16.036 "dhgroup": "ffdhe8192" 00:16:16.036 } 00:16:16.036 } 00:16:16.036 ]' 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.036 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.294 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:16.294 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.294 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.295 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.295 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.553 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:16.553 09:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret 
DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.121 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.380 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.639 00:16:17.639 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.639 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.639 09:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.897 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.897 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.897 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.897 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.897 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.897 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.897 { 00:16:17.897 "cntlid": 91, 00:16:17.897 "qid": 0, 00:16:17.897 "state": "enabled", 00:16:17.897 "thread": "nvmf_tgt_poll_group_000", 00:16:17.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.897 "listen_address": { 00:16:17.897 "trtype": "TCP", 00:16:17.897 "adrfam": "IPv4", 00:16:17.897 "traddr": "10.0.0.2", 00:16:17.897 "trsvcid": "4420" 00:16:17.897 }, 00:16:17.897 "peer_address": { 00:16:17.897 "trtype": "TCP", 00:16:17.897 "adrfam": "IPv4", 00:16:17.897 "traddr": "10.0.0.1", 00:16:17.897 "trsvcid": "32952" 00:16:17.897 }, 00:16:17.897 "auth": { 00:16:17.897 "state": "completed", 00:16:17.897 "digest": "sha384", 00:16:17.897 "dhgroup": "ffdhe8192" 00:16:17.897 } 00:16:17.897 } 00:16:17.897 ]' 00:16:17.897 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.157 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.157 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.157 09:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.157 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.157 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.157 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.157 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.416 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:18.416 09:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.985 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:19.244 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.244 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.244 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.244 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.502 00:16:19.502 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.502 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.502 09:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.762 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.762 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.762 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.762 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.762 09:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.762 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.762 { 00:16:19.762 "cntlid": 93, 00:16:19.762 "qid": 0, 00:16:19.762 "state": "enabled", 00:16:19.762 "thread": "nvmf_tgt_poll_group_000", 00:16:19.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.762 "listen_address": { 00:16:19.762 "trtype": "TCP", 00:16:19.762 "adrfam": "IPv4", 00:16:19.762 "traddr": "10.0.0.2", 00:16:19.762 "trsvcid": "4420" 00:16:19.762 }, 00:16:19.762 "peer_address": { 00:16:19.762 "trtype": "TCP", 00:16:19.762 "adrfam": "IPv4", 00:16:19.762 "traddr": "10.0.0.1", 00:16:19.762 "trsvcid": "32966" 00:16:19.762 }, 00:16:19.762 "auth": { 00:16:19.762 "state": "completed", 00:16:19.762 "digest": "sha384", 00:16:19.762 "dhgroup": "ffdhe8192" 00:16:19.762 } 00:16:19.762 } 00:16:19.762 ]' 00:16:19.762 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.762 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.762 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.021 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.021 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.021 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.021 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.021 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.021 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:20.022 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:20.590 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.590 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.590 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.590 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.849 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.849 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.849 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.849 09:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.849 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:20.849 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.849 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.849 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.849 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.849 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.850 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.850 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.850 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.850 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.850 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.850 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.850 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.417 00:16:21.417 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.417 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.417 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.676 { 00:16:21.676 "cntlid": 95, 00:16:21.676 "qid": 0, 00:16:21.676 "state": "enabled", 00:16:21.676 "thread": "nvmf_tgt_poll_group_000", 00:16:21.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.676 "listen_address": { 00:16:21.676 "trtype": "TCP", 00:16:21.676 "adrfam": "IPv4", 00:16:21.676 "traddr": "10.0.0.2", 00:16:21.676 "trsvcid": "4420" 00:16:21.676 }, 00:16:21.676 "peer_address": { 00:16:21.676 "trtype": "TCP", 00:16:21.676 "adrfam": "IPv4", 00:16:21.676 "traddr": "10.0.0.1", 
00:16:21.676 "trsvcid": "60554" 00:16:21.676 }, 00:16:21.676 "auth": { 00:16:21.676 "state": "completed", 00:16:21.676 "digest": "sha384", 00:16:21.676 "dhgroup": "ffdhe8192" 00:16:21.676 } 00:16:21.676 } 00:16:21.676 ]' 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.676 09:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.934 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:21.935 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:22.502 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.761 09:48:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.761 09:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.020 00:16:23.020 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.020 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.020 09:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.279 { 00:16:23.279 "cntlid": 97, 00:16:23.279 "qid": 0, 00:16:23.279 "state": "enabled", 00:16:23.279 "thread": "nvmf_tgt_poll_group_000", 00:16:23.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.279 "listen_address": { 00:16:23.279 "trtype": "TCP", 00:16:23.279 "adrfam": "IPv4", 00:16:23.279 "traddr": "10.0.0.2", 00:16:23.279 "trsvcid": "4420" 00:16:23.279 }, 00:16:23.279 "peer_address": { 00:16:23.279 "trtype": "TCP", 00:16:23.279 "adrfam": "IPv4", 00:16:23.279 "traddr": "10.0.0.1", 00:16:23.279 "trsvcid": "60576" 00:16:23.279 }, 00:16:23.279 "auth": { 00:16:23.279 "state": "completed", 00:16:23.279 "digest": "sha512", 00:16:23.279 "dhgroup": "null" 00:16:23.279 } 00:16:23.279 } 00:16:23.279 ]' 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.279 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.538 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:23.538 09:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:24.106 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.106 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.106 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.106 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.106 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.106 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.106 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:24.106 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.366 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.624 00:16:24.624 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.624 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.624 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.883 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.883 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.883 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:24.883 09:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.883 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.883 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.883 { 00:16:24.883 "cntlid": 99, 00:16:24.883 "qid": 0, 00:16:24.883 "state": "enabled", 00:16:24.883 "thread": "nvmf_tgt_poll_group_000", 00:16:24.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.883 "listen_address": { 00:16:24.883 "trtype": "TCP", 00:16:24.883 "adrfam": "IPv4", 00:16:24.883 "traddr": "10.0.0.2", 00:16:24.883 "trsvcid": "4420" 00:16:24.883 }, 00:16:24.883 "peer_address": { 00:16:24.883 "trtype": "TCP", 00:16:24.883 "adrfam": "IPv4", 00:16:24.883 "traddr": "10.0.0.1", 00:16:24.883 "trsvcid": "60610" 00:16:24.883 }, 00:16:24.883 "auth": { 00:16:24.883 "state": "completed", 00:16:24.883 "digest": "sha512", 00:16:24.883 "dhgroup": "null" 00:16:24.883 } 00:16:24.883 } 00:16:24.883 ]' 00:16:24.883 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.883 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.883 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.883 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.883 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.883 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.883 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.883 09:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.142 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:25.142 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:25.708 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.708 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.708 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.708 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.708 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.708 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.708 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:16:25.708 09:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.967 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.226 00:16:26.226 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.226 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.226 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.485 { 00:16:26.485 "cntlid": 101, 00:16:26.485 "qid": 0, 00:16:26.485 "state": "enabled", 00:16:26.485 "thread": "nvmf_tgt_poll_group_000", 00:16:26.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.485 "listen_address": { 00:16:26.485 "trtype": "TCP", 00:16:26.485 "adrfam": "IPv4", 00:16:26.485 
"traddr": "10.0.0.2", 00:16:26.485 "trsvcid": "4420" 00:16:26.485 }, 00:16:26.485 "peer_address": { 00:16:26.485 "trtype": "TCP", 00:16:26.485 "adrfam": "IPv4", 00:16:26.485 "traddr": "10.0.0.1", 00:16:26.485 "trsvcid": "60640" 00:16:26.485 }, 00:16:26.485 "auth": { 00:16:26.485 "state": "completed", 00:16:26.485 "digest": "sha512", 00:16:26.485 "dhgroup": "null" 00:16:26.485 } 00:16:26.485 } 00:16:26.485 ]' 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.485 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.744 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:26.744 09:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:27.311 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.311 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.311 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.311 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.311 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.311 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.311 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.311 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.570 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.829 00:16:27.829 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.829 09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.829 
09:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.088 { 00:16:28.088 "cntlid": 103, 00:16:28.088 "qid": 0, 00:16:28.088 "state": "enabled", 00:16:28.088 "thread": "nvmf_tgt_poll_group_000", 00:16:28.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.088 "listen_address": { 00:16:28.088 "trtype": "TCP", 00:16:28.088 "adrfam": "IPv4", 00:16:28.088 "traddr": "10.0.0.2", 00:16:28.088 "trsvcid": "4420" 00:16:28.088 }, 00:16:28.088 "peer_address": { 00:16:28.088 "trtype": "TCP", 00:16:28.088 "adrfam": "IPv4", 00:16:28.088 "traddr": "10.0.0.1", 00:16:28.088 "trsvcid": "60684" 00:16:28.088 }, 00:16:28.088 "auth": { 00:16:28.088 "state": "completed", 00:16:28.088 "digest": "sha512", 00:16:28.088 "dhgroup": "null" 00:16:28.088 } 00:16:28.088 } 00:16:28.088 ]' 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.088 09:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.088 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.347 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:28.347 09:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:28.915 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.915 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.915 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.915 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.915 09:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.915 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.915 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.915 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:28.915 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.174 
09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.174 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.433 00:16:29.434 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.434 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.434 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.692 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.692 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.692 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.692 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.692 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.692 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.692 { 00:16:29.692 "cntlid": 105, 00:16:29.692 "qid": 0, 00:16:29.692 "state": "enabled", 00:16:29.692 "thread": "nvmf_tgt_poll_group_000", 00:16:29.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.692 "listen_address": { 00:16:29.692 "trtype": "TCP", 00:16:29.692 "adrfam": "IPv4", 00:16:29.692 "traddr": "10.0.0.2", 00:16:29.692 "trsvcid": "4420" 00:16:29.692 }, 00:16:29.692 "peer_address": { 00:16:29.692 "trtype": "TCP", 00:16:29.692 "adrfam": "IPv4", 00:16:29.692 "traddr": "10.0.0.1", 00:16:29.692 "trsvcid": "60716" 00:16:29.692 }, 00:16:29.692 "auth": { 00:16:29.692 "state": "completed", 00:16:29.692 "digest": "sha512", 00:16:29.692 "dhgroup": "ffdhe2048" 00:16:29.692 } 00:16:29.692 } 00:16:29.692 ]' 00:16:29.693 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.693 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.693 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.693 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.693 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.693 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.693 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.693 09:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:29.952 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:29.952 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:30.522 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.522 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.522 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.522 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.522 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.522 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.522 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:30.522 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.838 09:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.838 09:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.124 00:16:31.124 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.124 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.124 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.124 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.124 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.124 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.124 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.124 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.124 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.124 { 00:16:31.124 "cntlid": 107, 00:16:31.124 "qid": 0, 00:16:31.124 "state": "enabled", 00:16:31.124 "thread": "nvmf_tgt_poll_group_000", 00:16:31.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.124 "listen_address": { 00:16:31.124 "trtype": "TCP", 00:16:31.124 "adrfam": "IPv4", 00:16:31.124 "traddr": "10.0.0.2", 00:16:31.124 "trsvcid": "4420" 00:16:31.124 }, 00:16:31.124 "peer_address": { 
00:16:31.124 "trtype": "TCP", 00:16:31.124 "adrfam": "IPv4", 00:16:31.124 "traddr": "10.0.0.1", 00:16:31.124 "trsvcid": "54738" 00:16:31.124 }, 00:16:31.124 "auth": { 00:16:31.124 "state": "completed", 00:16:31.124 "digest": "sha512", 00:16:31.124 "dhgroup": "ffdhe2048" 00:16:31.124 } 00:16:31.124 } 00:16:31.124 ]' 00:16:31.414 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.414 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.414 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.414 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.414 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.414 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.414 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.414 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.673 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:31.673 09:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.241 09:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.241 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.499 00:16:32.499 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.499 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.499 09:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.759 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.759 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.759 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.759 09:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.759 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.759 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.759 { 00:16:32.759 "cntlid": 109, 00:16:32.759 "qid": 0, 00:16:32.759 "state": "enabled", 00:16:32.759 "thread": "nvmf_tgt_poll_group_000", 00:16:32.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.759 "listen_address": { 00:16:32.759 "trtype": "TCP", 00:16:32.759 "adrfam": "IPv4", 00:16:32.759 "traddr": "10.0.0.2", 00:16:32.759 "trsvcid": "4420" 00:16:32.759 }, 00:16:32.759 "peer_address": { 00:16:32.759 "trtype": "TCP", 00:16:32.759 "adrfam": "IPv4", 00:16:32.759 "traddr": "10.0.0.1", 00:16:32.759 "trsvcid": "54766" 00:16:32.759 }, 00:16:32.759 "auth": { 00:16:32.759 "state": "completed", 00:16:32.759 "digest": "sha512", 00:16:32.759 "dhgroup": "ffdhe2048" 00:16:32.759 } 00:16:32.759 } 00:16:32.759 ]' 00:16:32.759 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.759 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.759 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:16:32.759 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.759 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.018 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.018 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.018 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.018 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:33.018 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:33.586 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.586 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.586 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.586 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.845 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.845 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.845 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.845 09:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.845 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.103 00:16:34.103 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.103 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.103 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.362 { 00:16:34.362 "cntlid": 111, 00:16:34.362 "qid": 0, 00:16:34.362 "state": "enabled", 00:16:34.362 "thread": "nvmf_tgt_poll_group_000", 00:16:34.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.362 "listen_address": { 00:16:34.362 "trtype": "TCP", 00:16:34.362 "adrfam": "IPv4", 00:16:34.362 "traddr": "10.0.0.2", 00:16:34.362 "trsvcid": "4420" 00:16:34.362 }, 00:16:34.362 "peer_address": { 00:16:34.362 "trtype": "TCP", 00:16:34.362 "adrfam": "IPv4", 00:16:34.362 "traddr": "10.0.0.1", 00:16:34.362 "trsvcid": "54808" 00:16:34.362 }, 00:16:34.362 "auth": { 00:16:34.362 "state": "completed", 00:16:34.362 "digest": "sha512", 00:16:34.362 "dhgroup": "ffdhe2048" 00:16:34.362 } 00:16:34.362 } 00:16:34.362 ]' 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.362 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.621 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.621 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.621 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:16:34.621 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:34.621 09:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:35.189 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.189 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.189 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.189 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.189 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.189 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.189 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.189 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:35.189 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.448 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.706 00:16:35.706 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.706 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.706 09:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.964 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.964 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.964 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.964 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.964 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.964 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.964 { 00:16:35.964 "cntlid": 113, 00:16:35.964 "qid": 0, 00:16:35.964 "state": "enabled", 00:16:35.964 "thread": "nvmf_tgt_poll_group_000", 00:16:35.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.964 "listen_address": { 00:16:35.964 "trtype": "TCP", 00:16:35.964 "adrfam": "IPv4", 00:16:35.965 "traddr": "10.0.0.2", 00:16:35.965 "trsvcid": "4420" 00:16:35.965 }, 00:16:35.965 "peer_address": { 00:16:35.965 "trtype": "TCP", 00:16:35.965 "adrfam": "IPv4", 
00:16:35.965 "traddr": "10.0.0.1", 00:16:35.965 "trsvcid": "54832" 00:16:35.965 }, 00:16:35.965 "auth": { 00:16:35.965 "state": "completed", 00:16:35.965 "digest": "sha512", 00:16:35.965 "dhgroup": "ffdhe3072" 00:16:35.965 } 00:16:35.965 } 00:16:35.965 ]' 00:16:35.965 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.965 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.965 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.965 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.965 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.223 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.223 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.223 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.223 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:36.223 09:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:36.790 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.790 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.790 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.790 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.790 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.790 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.790 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.790 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe3072 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.048 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.306 00:16:37.306 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.306 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.306 
09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.564 { 00:16:37.564 "cntlid": 115, 00:16:37.564 "qid": 0, 00:16:37.564 "state": "enabled", 00:16:37.564 "thread": "nvmf_tgt_poll_group_000", 00:16:37.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.564 "listen_address": { 00:16:37.564 "trtype": "TCP", 00:16:37.564 "adrfam": "IPv4", 00:16:37.564 "traddr": "10.0.0.2", 00:16:37.564 "trsvcid": "4420" 00:16:37.564 }, 00:16:37.564 "peer_address": { 00:16:37.564 "trtype": "TCP", 00:16:37.564 "adrfam": "IPv4", 00:16:37.564 "traddr": "10.0.0.1", 00:16:37.564 "trsvcid": "54868" 00:16:37.564 }, 00:16:37.564 "auth": { 00:16:37.564 "state": "completed", 00:16:37.564 "digest": "sha512", 00:16:37.564 "dhgroup": "ffdhe3072" 00:16:37.564 } 00:16:37.564 } 00:16:37.564 ]' 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.564 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.823 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.823 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.823 09:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.823 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:37.824 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:38.391 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.391 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.391 09:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.391 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.391 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.391 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.391 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.650 09:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.650 09:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.909 00:16:38.909 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.909 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.909 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.168 { 00:16:39.168 "cntlid": 117, 00:16:39.168 "qid": 0, 00:16:39.168 "state": "enabled", 00:16:39.168 "thread": "nvmf_tgt_poll_group_000", 00:16:39.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.168 "listen_address": { 00:16:39.168 "trtype": "TCP", 00:16:39.168 "adrfam": "IPv4", 00:16:39.168 "traddr": "10.0.0.2", 00:16:39.168 "trsvcid": "4420" 00:16:39.168 }, 00:16:39.168 "peer_address": { 00:16:39.168 "trtype": "TCP", 00:16:39.168 "adrfam": "IPv4", 00:16:39.168 "traddr": "10.0.0.1", 00:16:39.168 "trsvcid": "54892" 00:16:39.168 }, 00:16:39.168 "auth": { 00:16:39.168 "state": "completed", 00:16:39.168 "digest": "sha512", 00:16:39.168 "dhgroup": "ffdhe3072" 00:16:39.168 } 00:16:39.168 } 00:16:39.168 ]' 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.168 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.427 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.427 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.427 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.427 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:39.427 09:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:39.994 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.994 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.994 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.994 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.994 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.994 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.994 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.995 09:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.253 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.253 09:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.512 00:16:40.512 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.512 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.512 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.771 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.771 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.771 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.771 09:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.771 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.771 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.771 { 00:16:40.771 "cntlid": 119, 00:16:40.771 "qid": 0, 00:16:40.771 "state": "enabled", 00:16:40.771 "thread": "nvmf_tgt_poll_group_000", 00:16:40.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.771 "listen_address": { 00:16:40.771 "trtype": "TCP", 00:16:40.771 "adrfam": "IPv4", 00:16:40.771 "traddr": "10.0.0.2", 00:16:40.771 "trsvcid": "4420" 00:16:40.771 }, 00:16:40.771 "peer_address": { 00:16:40.771 "trtype": 
"TCP", 00:16:40.771 "adrfam": "IPv4", 00:16:40.771 "traddr": "10.0.0.1", 00:16:40.771 "trsvcid": "33982" 00:16:40.771 }, 00:16:40.771 "auth": { 00:16:40.771 "state": "completed", 00:16:40.771 "digest": "sha512", 00:16:40.771 "dhgroup": "ffdhe3072" 00:16:40.771 } 00:16:40.771 } 00:16:40.771 ]' 00:16:40.771 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.771 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.771 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.771 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.771 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.030 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.030 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.030 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.030 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:41.030 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 
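The repeated verification step above pulls `auth.digest`, `auth.dhgroup`, and `auth.state` out of the `nvmf_subsystem_get_qpairs` output with `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) and compares them against the expected values. A minimal Python sketch of the same check, run against a qpair document shaped like the ones in this log (field names and values copied from the log, not from SPDK source):

```python
import json

# A qpair listing shaped like the nvmf_subsystem_get_qpairs output above.
qpairs_json = '''
[
  {
    "cntlid": 119,
    "qid": 0,
    "state": "enabled",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.2", "trsvcid": "4420"},
    "auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe3072"}
  }
]
'''

def check_auth(qpairs, digest, dhgroup):
    """Mirror the jq checks: .[0].auth.{digest,dhgroup,state}."""
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

qpairs = json.loads(qpairs_json)
assert check_auth(qpairs, "sha512", "ffdhe3072")
```

In the real test the JSON comes from `rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0`; a mismatch in any of the three fields fails the `[[ ... == ... ]]` comparison and aborts the run.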
00:16:41.598 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.598 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.598 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.598 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.598 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.598 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.598 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.598 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.598 09:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key0 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.858 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.117 00:16:42.117 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.117 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.117 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.376 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.376 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.376 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.376 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.376 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.376 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.376 { 00:16:42.376 "cntlid": 121, 00:16:42.376 "qid": 0, 00:16:42.376 "state": "enabled", 00:16:42.376 "thread": "nvmf_tgt_poll_group_000", 00:16:42.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.376 "listen_address": { 00:16:42.376 "trtype": "TCP", 00:16:42.376 "adrfam": "IPv4", 00:16:42.376 "traddr": "10.0.0.2", 00:16:42.376 "trsvcid": "4420" 00:16:42.376 }, 00:16:42.376 "peer_address": { 00:16:42.376 "trtype": "TCP", 00:16:42.376 "adrfam": "IPv4", 00:16:42.376 "traddr": "10.0.0.1", 00:16:42.376 "trsvcid": "33998" 00:16:42.376 }, 00:16:42.376 "auth": { 00:16:42.376 "state": "completed", 00:16:42.376 "digest": "sha512", 00:16:42.376 "dhgroup": "ffdhe4096" 00:16:42.376 } 00:16:42.376 } 00:16:42.376 ]' 00:16:42.376 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.376 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.376 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.376 09:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.377 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.636 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.636 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.636 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.636 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:42.636 09:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:43.204 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.204 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.204 09:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.204 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.462 09:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.462 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.463 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.463 09:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.722 00:16:43.722 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.722 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.722 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.981 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.981 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.981 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.981 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.981 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.981 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.981 { 00:16:43.981 "cntlid": 123, 00:16:43.981 "qid": 0, 00:16:43.981 "state": "enabled", 00:16:43.981 "thread": "nvmf_tgt_poll_group_000", 00:16:43.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.981 "listen_address": { 00:16:43.981 "trtype": "TCP", 00:16:43.981 "adrfam": "IPv4", 00:16:43.981 "traddr": "10.0.0.2", 00:16:43.981 "trsvcid": "4420" 00:16:43.981 }, 00:16:43.981 "peer_address": { 00:16:43.981 "trtype": "TCP", 00:16:43.981 "adrfam": "IPv4", 00:16:43.981 "traddr": "10.0.0.1", 00:16:43.981 "trsvcid": "34030" 00:16:43.981 }, 00:16:43.981 "auth": { 00:16:43.981 "state": "completed", 00:16:43.981 "digest": "sha512", 00:16:43.981 "dhgroup": "ffdhe4096" 00:16:43.981 } 00:16:43.981 } 00:16:43.981 ]' 00:16:43.981 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.981 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.981 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.240 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.240 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.240 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.240 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.240 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.499 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:44.499 09:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.067 09:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.067 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.326 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.585 { 00:16:45.585 "cntlid": 125, 00:16:45.585 "qid": 0, 00:16:45.585 "state": "enabled", 00:16:45.585 "thread": "nvmf_tgt_poll_group_000", 00:16:45.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.585 "listen_address": { 00:16:45.585 "trtype": "TCP", 00:16:45.585 "adrfam": "IPv4", 00:16:45.585 "traddr": "10.0.0.2", 00:16:45.585 
"trsvcid": "4420" 00:16:45.585 }, 00:16:45.585 "peer_address": { 00:16:45.585 "trtype": "TCP", 00:16:45.585 "adrfam": "IPv4", 00:16:45.585 "traddr": "10.0.0.1", 00:16:45.585 "trsvcid": "34056" 00:16:45.585 }, 00:16:45.585 "auth": { 00:16:45.585 "state": "completed", 00:16:45.585 "digest": "sha512", 00:16:45.585 "dhgroup": "ffdhe4096" 00:16:45.585 } 00:16:45.585 } 00:16:45.585 ]' 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.585 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.844 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.844 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.844 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.844 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.844 09:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.103 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:46.103 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.671 09:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.930 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.189 { 00:16:47.189 "cntlid": 127, 00:16:47.189 "qid": 0, 00:16:47.189 "state": "enabled", 00:16:47.189 "thread": "nvmf_tgt_poll_group_000", 00:16:47.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.189 "listen_address": { 00:16:47.189 "trtype": "TCP", 00:16:47.189 "adrfam": "IPv4", 00:16:47.189 "traddr": "10.0.0.2", 00:16:47.189 "trsvcid": "4420" 00:16:47.189 }, 00:16:47.189 "peer_address": { 00:16:47.189 "trtype": "TCP", 00:16:47.189 "adrfam": "IPv4", 00:16:47.189 "traddr": "10.0.0.1", 00:16:47.189 "trsvcid": "34088" 00:16:47.189 }, 00:16:47.189 "auth": { 00:16:47.189 "state": "completed", 00:16:47.189 "digest": "sha512", 00:16:47.189 "dhgroup": "ffdhe4096" 00:16:47.189 } 00:16:47.189 } 00:16:47.189 ]' 00:16:47.189 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.448 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.448 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.448 09:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.448 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.448 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.448 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.448 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.707 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:47.707 09:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.276 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.845 00:16:48.845 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.845 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.845 09:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.845 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.845 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.845 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.845 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.845 09:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.845 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.845 { 00:16:48.845 "cntlid": 129, 00:16:48.845 "qid": 0, 00:16:48.845 "state": "enabled", 00:16:48.845 "thread": "nvmf_tgt_poll_group_000", 00:16:48.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.845 "listen_address": { 00:16:48.845 "trtype": "TCP", 00:16:48.845 "adrfam": "IPv4", 00:16:48.845 "traddr": "10.0.0.2", 00:16:48.845 "trsvcid": "4420" 00:16:48.845 }, 00:16:48.845 "peer_address": { 00:16:48.845 "trtype": "TCP", 00:16:48.845 "adrfam": "IPv4", 00:16:48.845 "traddr": "10.0.0.1", 00:16:48.845 "trsvcid": "34124" 00:16:48.845 }, 00:16:48.845 "auth": { 00:16:48.845 "state": "completed", 00:16:48.845 "digest": "sha512", 00:16:48.845 "dhgroup": "ffdhe6144" 00:16:48.845 } 00:16:48.845 } 00:16:48.845 ]' 00:16:48.845 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.104 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.104 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.104 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.104 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.104 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.104 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.104 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.363 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:49.363 09:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:49.931 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.931 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.931 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.931 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.931 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.931 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.931 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:49.931 09:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.189 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:50.189 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.190 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.448 00:16:50.448 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.448 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.448 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.707 { 00:16:50.707 "cntlid": 131, 00:16:50.707 "qid": 0, 00:16:50.707 "state": "enabled", 00:16:50.707 "thread": "nvmf_tgt_poll_group_000", 00:16:50.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.707 "listen_address": { 00:16:50.707 "trtype": "TCP", 00:16:50.707 "adrfam": "IPv4", 00:16:50.707 "traddr": "10.0.0.2", 00:16:50.707 
"trsvcid": "4420" 00:16:50.707 }, 00:16:50.707 "peer_address": { 00:16:50.707 "trtype": "TCP", 00:16:50.707 "adrfam": "IPv4", 00:16:50.707 "traddr": "10.0.0.1", 00:16:50.707 "trsvcid": "34148" 00:16:50.707 }, 00:16:50.707 "auth": { 00:16:50.707 "state": "completed", 00:16:50.707 "digest": "sha512", 00:16:50.707 "dhgroup": "ffdhe6144" 00:16:50.707 } 00:16:50.707 } 00:16:50.707 ]' 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.707 09:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.965 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:50.965 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:51.531 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.531 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.531 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.531 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.531 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.531 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.531 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.531 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.840 09:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.098 00:16:52.098 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.098 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:52.098 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.357 { 00:16:52.357 "cntlid": 133, 00:16:52.357 "qid": 0, 00:16:52.357 "state": "enabled", 00:16:52.357 "thread": "nvmf_tgt_poll_group_000", 00:16:52.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.357 "listen_address": { 00:16:52.357 "trtype": "TCP", 00:16:52.357 "adrfam": "IPv4", 00:16:52.357 "traddr": "10.0.0.2", 00:16:52.357 "trsvcid": "4420" 00:16:52.357 }, 00:16:52.357 "peer_address": { 00:16:52.357 "trtype": "TCP", 00:16:52.357 "adrfam": "IPv4", 00:16:52.357 "traddr": "10.0.0.1", 00:16:52.357 "trsvcid": "60948" 00:16:52.357 }, 00:16:52.357 "auth": { 00:16:52.357 "state": "completed", 00:16:52.357 "digest": "sha512", 00:16:52.357 "dhgroup": "ffdhe6144" 00:16:52.357 } 00:16:52.357 } 00:16:52.357 ]' 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.357 09:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.357 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.616 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:52.616 09:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:53.182 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.182 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.182 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.182 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.182 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.182 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.182 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.182 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.442 09:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.701 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.960 { 00:16:53.960 "cntlid": 135, 00:16:53.960 "qid": 0, 00:16:53.960 "state": "enabled", 00:16:53.960 "thread": "nvmf_tgt_poll_group_000", 00:16:53.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.960 "listen_address": { 00:16:53.960 "trtype": "TCP", 00:16:53.960 "adrfam": "IPv4", 00:16:53.960 "traddr": "10.0.0.2", 00:16:53.960 "trsvcid": "4420" 00:16:53.960 }, 00:16:53.960 "peer_address": { 00:16:53.960 "trtype": "TCP", 00:16:53.960 "adrfam": "IPv4", 00:16:53.960 "traddr": "10.0.0.1", 00:16:53.960 "trsvcid": "60964" 00:16:53.960 }, 00:16:53.960 "auth": { 00:16:53.960 "state": "completed", 00:16:53.960 "digest": "sha512", 00:16:53.960 "dhgroup": "ffdhe6144" 00:16:53.960 } 00:16:53.960 } 00:16:53.960 ]' 00:16:53.960 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.218 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.218 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.218 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.218 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.218 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.218 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.218 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.476 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:54.476 09:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:16:55.044 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.044 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.044 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.044 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.044 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.044 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.044 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.044 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.045 09:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.045 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:55.045 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.045 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.045 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.045 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.045 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.045 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.045 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.045 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.303 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.303 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.303 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.303 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.562 00:16:55.562 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.562 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.562 09:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.822 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.822 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.822 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.822 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.822 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.822 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.822 { 00:16:55.822 "cntlid": 137, 00:16:55.822 "qid": 0, 00:16:55.822 "state": "enabled", 00:16:55.822 "thread": "nvmf_tgt_poll_group_000", 00:16:55.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.822 "listen_address": { 00:16:55.822 "trtype": "TCP", 00:16:55.822 "adrfam": "IPv4", 00:16:55.822 "traddr": "10.0.0.2", 00:16:55.822 
"trsvcid": "4420" 00:16:55.822 }, 00:16:55.822 "peer_address": { 00:16:55.822 "trtype": "TCP", 00:16:55.822 "adrfam": "IPv4", 00:16:55.822 "traddr": "10.0.0.1", 00:16:55.822 "trsvcid": "60990" 00:16:55.822 }, 00:16:55.822 "auth": { 00:16:55.822 "state": "completed", 00:16:55.822 "digest": "sha512", 00:16:55.822 "dhgroup": "ffdhe8192" 00:16:55.822 } 00:16:55.822 } 00:16:55.822 ]' 00:16:55.822 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.822 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.822 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.086 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.086 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.086 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.086 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.086 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.346 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:56.346 09:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:16:56.914 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.914 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.914 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.914 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.914 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.914 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.914 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.914 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.915 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:56.915 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.915 09:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.915 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.915 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:56.915 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.915 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.915 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.915 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.173 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.173 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.173 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.173 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.432 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.691 { 00:16:57.691 "cntlid": 139, 00:16:57.691 "qid": 0, 00:16:57.691 "state": "enabled", 00:16:57.691 "thread": "nvmf_tgt_poll_group_000", 00:16:57.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.691 "listen_address": { 00:16:57.691 "trtype": "TCP", 00:16:57.691 "adrfam": "IPv4", 00:16:57.691 "traddr": "10.0.0.2", 00:16:57.691 "trsvcid": "4420" 00:16:57.691 }, 00:16:57.691 "peer_address": { 00:16:57.691 "trtype": "TCP", 00:16:57.691 "adrfam": "IPv4", 00:16:57.691 "traddr": "10.0.0.1", 00:16:57.691 "trsvcid": "32778" 00:16:57.691 }, 00:16:57.691 "auth": { 00:16:57.691 "state": "completed", 00:16:57.691 "digest": "sha512", 00:16:57.691 "dhgroup": "ffdhe8192" 00:16:57.691 } 00:16:57.691 } 00:16:57.691 ]' 00:16:57.691 09:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.691 09:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.691 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.950 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.950 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.950 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.950 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.950 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.209 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:58.209 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: --dhchap-ctrl-secret DHHC-1:02:MjI4ZWYwNzliOGRhYzlhYzUxOTE4M2RhY2FhMTUyZWYwYTM2OThiZDcyNjUzZmFhmRQ7uQ==: 00:16:58.777 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.777 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.777 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.777 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.777 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.777 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.777 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.777 09:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
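Each iteration in this log validates the negotiated DH-HMAC-CHAP parameters by piping the `nvmf_subsystem_get_qpairs` output through `jq -r '.[0].auth.digest'`, `.[0].auth.dhgroup`, and `.[0].auth.state`, then string-comparing against the expected values. The same check can be sketched in Python (a minimal sketch, not part of the test suite; the JSON shape and the sample values follow the qpair dumps captured above):

```python
import json

# Sample qpair listing in the shape shown in this log for
# `rpc.py nvmf_subsystem_get_qpairs` (values from the sha512/ffdhe8192 pass).
qpairs_json = '''
[
  {
    "cntlid": 141,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "ffdhe8192"
    }
  }
]
'''

def auth_matches(qpairs, digest, dhgroup):
    """Mirror the three jq assertions from target/auth.sh:
    .[0].auth.digest, .[0].auth.dhgroup, and .[0].auth.state
    must equal the expected digest, dhgroup, and "completed"."""
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

qpairs = json.loads(qpairs_json)
print(auth_matches(qpairs, "sha512", "ffdhe8192"))  # True
```

In the script itself the comparison is done with bash `[[ ... == ... ]]` pattern matches (hence the escaped `\s\h\a\5\1\2` forms in the xtrace output), which is equivalent to the equality checks above.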
00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.777 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.345 00:16:59.346 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.346 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.346 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.606 09:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.606 { 00:16:59.606 "cntlid": 141, 00:16:59.606 "qid": 0, 00:16:59.606 "state": "enabled", 00:16:59.606 "thread": "nvmf_tgt_poll_group_000", 00:16:59.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.606 "listen_address": { 00:16:59.606 "trtype": "TCP", 00:16:59.606 "adrfam": "IPv4", 00:16:59.606 "traddr": "10.0.0.2", 00:16:59.606 "trsvcid": "4420" 00:16:59.606 }, 00:16:59.606 "peer_address": { 00:16:59.606 "trtype": "TCP", 00:16:59.606 "adrfam": "IPv4", 00:16:59.606 "traddr": "10.0.0.1", 00:16:59.606 "trsvcid": "32818" 00:16:59.606 }, 00:16:59.606 "auth": { 00:16:59.606 "state": "completed", 00:16:59.606 "digest": "sha512", 00:16:59.606 "dhgroup": "ffdhe8192" 00:16:59.606 } 00:16:59.606 } 00:16:59.606 ]' 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.606 09:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.866 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:16:59.866 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:01:YmJiMjAwYjMyNDViNzEzY2FiMjQyZGNiNjlhZjYzNDYiI5pa: 00:17:00.435 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.436 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.436 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.436 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.436 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.436 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.436 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.436 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.695 09:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.263 00:17:01.263 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.264 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.264 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.523 { 00:17:01.523 "cntlid": 143, 00:17:01.523 "qid": 0, 00:17:01.523 "state": "enabled", 00:17:01.523 "thread": "nvmf_tgt_poll_group_000", 00:17:01.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.523 "listen_address": { 00:17:01.523 "trtype": "TCP", 00:17:01.523 "adrfam": 
"IPv4", 00:17:01.523 "traddr": "10.0.0.2", 00:17:01.523 "trsvcid": "4420" 00:17:01.523 }, 00:17:01.523 "peer_address": { 00:17:01.523 "trtype": "TCP", 00:17:01.523 "adrfam": "IPv4", 00:17:01.523 "traddr": "10.0.0.1", 00:17:01.523 "trsvcid": "40628" 00:17:01.523 }, 00:17:01.523 "auth": { 00:17:01.523 "state": "completed", 00:17:01.523 "digest": "sha512", 00:17:01.523 "dhgroup": "ffdhe8192" 00:17:01.523 } 00:17:01.523 } 00:17:01.523 ]' 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.523 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.781 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:17:01.781 09:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.348 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.606 09:49:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.606 09:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.175 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.175 { 00:17:03.175 "cntlid": 145, 00:17:03.175 "qid": 0, 00:17:03.175 "state": "enabled", 00:17:03.175 "thread": "nvmf_tgt_poll_group_000", 00:17:03.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.175 "listen_address": { 00:17:03.175 "trtype": "TCP", 00:17:03.175 "adrfam": "IPv4", 00:17:03.175 "traddr": "10.0.0.2", 00:17:03.175 "trsvcid": "4420" 00:17:03.175 }, 00:17:03.175 "peer_address": { 00:17:03.175 "trtype": "TCP", 00:17:03.175 "adrfam": "IPv4", 00:17:03.175 "traddr": "10.0.0.1", 00:17:03.175 "trsvcid": "40646" 00:17:03.175 }, 00:17:03.175 "auth": { 00:17:03.175 "state": 
"completed", 00:17:03.175 "digest": "sha512", 00:17:03.175 "dhgroup": "ffdhe8192" 00:17:03.175 } 00:17:03.175 } 00:17:03.175 ]' 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.175 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.433 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.433 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.433 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.433 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.433 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.433 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.692 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:17:03.692 09:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NmNkZjBkMDJmZmQwMTI4Y2FlNTY0M2IzOGNjMGQyYzBlYmRiODE1ZTRlNWE0YTMxPoaPKw==: --dhchap-ctrl-secret 
DHHC-1:03:YTg3ODMxYWYyY2NhZGRiMGU4YzZlMzNhM2VlYTU3OGEwNDgxNDE5MTFlNjJjNTcwOTQwNTNjZmRjZmFlMGJhZj6RRak=: 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:04.260 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:04.519 request: 00:17:04.519 { 00:17:04.519 "name": "nvme0", 00:17:04.519 "trtype": "tcp", 00:17:04.519 "traddr": "10.0.0.2", 00:17:04.519 "adrfam": "ipv4", 00:17:04.519 "trsvcid": "4420", 00:17:04.519 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.519 "prchk_reftag": false, 00:17:04.519 "prchk_guard": false, 00:17:04.519 "hdgst": false, 00:17:04.519 "ddgst": false, 00:17:04.519 "dhchap_key": "key2", 00:17:04.519 "allow_unrecognized_csi": false, 00:17:04.519 "method": "bdev_nvme_attach_controller", 00:17:04.519 "req_id": 1 00:17:04.519 } 00:17:04.519 Got JSON-RPC error response 00:17:04.519 response: 00:17:04.519 { 00:17:04.519 "code": -5, 00:17:04.519 "message": 
"Input/output error" 00:17:04.519 } 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:04.778 09:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.778 09:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.037 request: 00:17:05.037 { 00:17:05.037 "name": "nvme0", 00:17:05.037 "trtype": "tcp", 00:17:05.037 "traddr": "10.0.0.2", 00:17:05.037 "adrfam": "ipv4", 00:17:05.037 "trsvcid": "4420", 00:17:05.037 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.037 "prchk_reftag": false, 00:17:05.037 "prchk_guard": false, 00:17:05.037 "hdgst": 
false, 00:17:05.037 "ddgst": false, 00:17:05.037 "dhchap_key": "key1", 00:17:05.037 "dhchap_ctrlr_key": "ckey2", 00:17:05.037 "allow_unrecognized_csi": false, 00:17:05.037 "method": "bdev_nvme_attach_controller", 00:17:05.037 "req_id": 1 00:17:05.037 } 00:17:05.037 Got JSON-RPC error response 00:17:05.037 response: 00:17:05.037 { 00:17:05.037 "code": -5, 00:17:05.037 "message": "Input/output error" 00:17:05.037 } 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.037 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.295 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.582 request: 00:17:05.582 { 00:17:05.582 "name": "nvme0", 00:17:05.582 "trtype": 
"tcp", 00:17:05.582 "traddr": "10.0.0.2", 00:17:05.582 "adrfam": "ipv4", 00:17:05.582 "trsvcid": "4420", 00:17:05.582 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.582 "prchk_reftag": false, 00:17:05.582 "prchk_guard": false, 00:17:05.582 "hdgst": false, 00:17:05.582 "ddgst": false, 00:17:05.582 "dhchap_key": "key1", 00:17:05.582 "dhchap_ctrlr_key": "ckey1", 00:17:05.582 "allow_unrecognized_csi": false, 00:17:05.582 "method": "bdev_nvme_attach_controller", 00:17:05.582 "req_id": 1 00:17:05.582 } 00:17:05.582 Got JSON-RPC error response 00:17:05.582 response: 00:17:05.582 { 00:17:05.582 "code": -5, 00:17:05.582 "message": "Input/output error" 00:17:05.582 } 00:17:05.582 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.582 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.582 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2892386 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2892386 ']' 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2892386 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892386 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2892386' 00:17:05.583 killing process with pid 2892386 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2892386 00:17:05.583 09:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2892386 00:17:05.842 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:05.842 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.842 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2914620 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2914620 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2914620 ']' 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.843 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2914620 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2914620 ']' 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.101 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.360 null0 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wvq 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Ii6 ]] 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ii6 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0G4 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.360 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.4R1 ]] 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4R1 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.c2W 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.IHP ]] 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IHP 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.L5b 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.361 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.621 09:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.190 nvme0n1 00:17:07.190 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.190 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.190 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.449 { 00:17:07.449 "cntlid": 1, 00:17:07.449 "qid": 0, 00:17:07.449 "state": "enabled", 00:17:07.449 "thread": "nvmf_tgt_poll_group_000", 00:17:07.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.449 "listen_address": { 00:17:07.449 "trtype": "TCP", 00:17:07.449 "adrfam": "IPv4", 00:17:07.449 "traddr": "10.0.0.2", 00:17:07.449 "trsvcid": "4420" 00:17:07.449 }, 00:17:07.449 "peer_address": { 00:17:07.449 "trtype": "TCP", 00:17:07.449 "adrfam": "IPv4", 00:17:07.449 "traddr": 
"10.0.0.1", 00:17:07.449 "trsvcid": "40686" 00:17:07.449 }, 00:17:07.449 "auth": { 00:17:07.449 "state": "completed", 00:17:07.449 "digest": "sha512", 00:17:07.449 "dhgroup": "ffdhe8192" 00:17:07.449 } 00:17:07.449 } 00:17:07.449 ]' 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.449 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.708 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.708 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.708 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.708 09:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.708 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:17:07.708 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:17:08.279 09:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.611 09:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.611 09:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.959 request: 00:17:08.959 { 00:17:08.959 "name": "nvme0", 00:17:08.959 "trtype": "tcp", 00:17:08.959 "traddr": "10.0.0.2", 00:17:08.959 "adrfam": "ipv4", 00:17:08.959 "trsvcid": "4420", 00:17:08.959 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.959 "prchk_reftag": false, 00:17:08.959 "prchk_guard": false, 00:17:08.959 "hdgst": false, 00:17:08.959 "ddgst": false, 00:17:08.959 "dhchap_key": "key3", 00:17:08.959 
"allow_unrecognized_csi": false, 00:17:08.959 "method": "bdev_nvme_attach_controller", 00:17:08.959 "req_id": 1 00:17:08.959 } 00:17:08.959 Got JSON-RPC error response 00:17:08.959 response: 00:17:08.959 { 00:17:08.959 "code": -5, 00:17:08.959 "message": "Input/output error" 00:17:08.959 } 00:17:08.959 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.959 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.959 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.959 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.959 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:08.959 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:08.959 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.960 09:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.960 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.244 request: 00:17:09.244 { 00:17:09.244 "name": "nvme0", 00:17:09.244 "trtype": "tcp", 00:17:09.244 "traddr": "10.0.0.2", 00:17:09.244 "adrfam": "ipv4", 00:17:09.244 "trsvcid": "4420", 00:17:09.244 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.244 "prchk_reftag": false, 00:17:09.244 "prchk_guard": false, 00:17:09.244 "hdgst": false, 00:17:09.244 "ddgst": false, 00:17:09.244 "dhchap_key": "key3", 00:17:09.244 "allow_unrecognized_csi": false, 00:17:09.244 "method": "bdev_nvme_attach_controller", 00:17:09.244 "req_id": 1 00:17:09.244 } 00:17:09.244 Got JSON-RPC error response 00:17:09.244 response: 00:17:09.244 { 00:17:09.244 "code": -5, 00:17:09.244 "message": "Input/output error" 00:17:09.244 } 00:17:09.244 
09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.244 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.244 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.244 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.244 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.244 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:09.244 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.244 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.244 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.244 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.503 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.503 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.503 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.503 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.503 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.503 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.504 09:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.762 request: 00:17:09.762 { 00:17:09.762 "name": "nvme0", 00:17:09.762 "trtype": "tcp", 00:17:09.762 "traddr": "10.0.0.2", 00:17:09.762 "adrfam": "ipv4", 00:17:09.762 "trsvcid": "4420", 00:17:09.762 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.762 "prchk_reftag": false, 00:17:09.762 "prchk_guard": false, 00:17:09.762 "hdgst": false, 00:17:09.762 "ddgst": false, 00:17:09.762 "dhchap_key": "key0", 00:17:09.762 "dhchap_ctrlr_key": "key1", 00:17:09.762 "allow_unrecognized_csi": false, 00:17:09.762 "method": "bdev_nvme_attach_controller", 00:17:09.762 "req_id": 1 00:17:09.762 } 00:17:09.762 Got JSON-RPC error response 00:17:09.762 response: 00:17:09.762 { 00:17:09.762 "code": -5, 00:17:09.762 "message": "Input/output error" 00:17:09.762 } 00:17:09.762 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.762 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.762 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.762 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.762 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:09.762 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:09.762 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:10.021 nvme0n1 00:17:10.021 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:10.021 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:10.021 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.280 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.280 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.280 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.539 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:10.539 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.539 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:10.539 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.539 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:10.539 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:10.539 09:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:11.476 nvme0n1 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.476 
09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:11.476 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.734 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.734 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:17:11.734 09:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: --dhchap-ctrl-secret DHHC-1:03:YTlkMTFkMmY1Mzg5MDhmOTQ3ZjYxMmFjMmFmNDFmOTU2ZmQxYTZiZjAxZjcxOTdlOGEyYmYzYWU4MDcwZDQ4MDxZajY=: 00:17:12.302 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:12.302 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:12.302 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:12.302 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:12.302 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:12.302 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:12.302 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:12.302 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.302 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:12.561 09:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:12.820 request: 00:17:12.820 { 00:17:12.820 "name": "nvme0", 00:17:12.820 "trtype": "tcp", 00:17:12.820 "traddr": "10.0.0.2", 00:17:12.820 "adrfam": "ipv4", 00:17:12.820 "trsvcid": "4420", 00:17:12.820 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:12.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.820 "prchk_reftag": false, 00:17:12.820 "prchk_guard": false, 00:17:12.820 "hdgst": false, 00:17:12.820 "ddgst": false, 00:17:12.820 "dhchap_key": "key1", 00:17:12.820 "allow_unrecognized_csi": false, 00:17:12.820 "method": "bdev_nvme_attach_controller", 00:17:12.820 "req_id": 1 00:17:12.820 } 00:17:12.820 Got JSON-RPC error response 00:17:12.820 response: 00:17:12.820 { 00:17:12.820 "code": -5, 00:17:12.820 "message": "Input/output error" 00:17:12.820 } 00:17:13.079 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:13.079 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:13.080 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:13.080 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:13.080 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:13.080 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:13.080 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:13.648 nvme0n1 00:17:13.648 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:13.648 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:13.648 09:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.907 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.907 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.907 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.166 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.166 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.166 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.166 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.166 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:14.166 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:14.166 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:14.424 nvme0n1 00:17:14.424 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:14.424 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:14.424 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.682 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.682 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.682 09:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: '' 2s 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: ]] 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTc2NGFlYjZiY2RjMjZmMjkxMzY5YzliMmJlNWY1Nza2AJyI: 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:14.940 09:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:16.845 
09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: 2s 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:16.845 09:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: ]] 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODE0ZGQ1ODI0N2U0MTZiYTk1NGQ4NzdhYWI1ZjNiODUwMDc4YTcwOWUxZmZmYWFj0++gkw==: 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:16.845 09:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.379 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:19.638 nvme0n1 00:17:19.638 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.638 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.638 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.638 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.638 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:19.638 09:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.204 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:20.204 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:20.204 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.463 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.463 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.463 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.463 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.463 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.463 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:20.463 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:20.723 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:20.723 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:20.723 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.723 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.723 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.723 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.723 09:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:20.723 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:21.292 request: 00:17:21.292 { 00:17:21.292 "name": "nvme0", 00:17:21.292 "dhchap_key": "key1", 00:17:21.292 "dhchap_ctrlr_key": "key3", 00:17:21.292 "method": "bdev_nvme_set_keys", 00:17:21.292 "req_id": 1 00:17:21.292 } 00:17:21.292 Got JSON-RPC error response 00:17:21.292 response: 00:17:21.292 { 00:17:21.292 "code": -13, 00:17:21.292 "message": "Permission denied" 00:17:21.292 } 00:17:21.292 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:21.292 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.292 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.292 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.292 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:21.292 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:21.292 09:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.550 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:21.550 09:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:22.483 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:22.483 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:22.483 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.741 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:22.741 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.741 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.741 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.741 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.741 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.741 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.741 09:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:23.306 nvme0n1 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.564 09:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.564 09:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.822 request: 00:17:23.822 { 00:17:23.822 "name": "nvme0", 00:17:23.822 "dhchap_key": "key2", 00:17:23.822 "dhchap_ctrlr_key": "key0", 00:17:23.822 "method": "bdev_nvme_set_keys", 00:17:23.822 "req_id": 1 00:17:23.822 } 00:17:23.822 Got JSON-RPC error response 00:17:23.822 response: 00:17:23.822 { 00:17:23.822 "code": -13, 00:17:23.822 "message": "Permission denied" 00:17:23.822 } 00:17:23.822 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:23.822 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.822 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.822 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.822 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:23.822 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:23.822 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.080 09:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:24.080 09:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:25.014 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:25.014 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:25.014 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2892412 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2892412 ']' 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2892412 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892412 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2892412' 00:17:25.274 killing process with pid 2892412 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2892412 00:17:25.274 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2892412 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.842 rmmod nvme_tcp 00:17:25.842 rmmod nvme_fabrics 00:17:25.842 rmmod nvme_keyring 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2914620 ']' 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2914620 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2914620 ']' 00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2914620 
00:17:25.842 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:25.843 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.843 09:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2914620 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2914620' 00:17:25.843 killing process with pid 2914620 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2914620 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2914620 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.843 09:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.843 09:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wvq /tmp/spdk.key-sha256.0G4 /tmp/spdk.key-sha384.c2W /tmp/spdk.key-sha512.L5b /tmp/spdk.key-sha512.Ii6 /tmp/spdk.key-sha384.4R1 /tmp/spdk.key-sha256.IHP '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:28.385 00:17:28.385 real 2m33.919s 00:17:28.385 user 5m54.996s 00:17:28.385 sys 0m24.552s 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.385 ************************************ 00:17:28.385 END TEST nvmf_auth_target 00:17:28.385 ************************************ 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.385 ************************************ 00:17:28.385 START TEST nvmf_bdevio_no_huge 00:17:28.385 ************************************ 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:28.385 * Looking for test storage... 00:17:28.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1703 -- # lcov --version 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:17:28.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.385 --rc genhtml_branch_coverage=1 00:17:28.385 --rc genhtml_function_coverage=1 00:17:28.385 --rc genhtml_legend=1 00:17:28.385 --rc geninfo_all_blocks=1 00:17:28.385 --rc geninfo_unexecuted_blocks=1 00:17:28.385 00:17:28.385 ' 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:17:28.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.385 --rc genhtml_branch_coverage=1 00:17:28.385 --rc genhtml_function_coverage=1 00:17:28.385 --rc genhtml_legend=1 00:17:28.385 --rc geninfo_all_blocks=1 00:17:28.385 --rc geninfo_unexecuted_blocks=1 00:17:28.385 00:17:28.385 ' 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:17:28.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.385 --rc genhtml_branch_coverage=1 00:17:28.385 --rc genhtml_function_coverage=1 00:17:28.385 --rc genhtml_legend=1 00:17:28.385 --rc geninfo_all_blocks=1 00:17:28.385 --rc geninfo_unexecuted_blocks=1 00:17:28.385 00:17:28.385 ' 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:17:28.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.385 --rc genhtml_branch_coverage=1 
00:17:28.385 --rc genhtml_function_coverage=1 00:17:28.385 --rc genhtml_legend=1 00:17:28.385 --rc geninfo_all_blocks=1 00:17:28.385 --rc geninfo_unexecuted_blocks=1 00:17:28.385 00:17:28.385 ' 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.385 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.386 09:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:28.386 09:49:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:34.956 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:34.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:34.956 Found net devices under 0000:86:00.0: cvl_0_0 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.956 
09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:34.956 Found net devices under 0000:86:00.1: cvl_0_1 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.956 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:34.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:17:34.957 00:17:34.957 --- 10.0.0.2 ping statistics --- 00:17:34.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.957 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:17:34.957 00:17:34.957 --- 10.0.0.1 ping statistics --- 00:17:34.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.957 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2921503 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2921503 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2921503 ']' 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.957 09:49:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.957 [2024-11-20 09:49:57.552302] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:17:34.957 [2024-11-20 09:49:57.552352] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:34.957 [2024-11-20 09:49:57.638150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.957 [2024-11-20 09:49:57.685477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.957 [2024-11-20 09:49:57.685511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.957 [2024-11-20 09:49:57.685519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.957 [2024-11-20 09:49:57.685524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.957 [2024-11-20 09:49:57.685529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.957 [2024-11-20 09:49:57.686720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:34.957 [2024-11-20 09:49:57.686752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:34.957 [2024-11-20 09:49:57.686861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.957 [2024-11-20 09:49:57.686863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.216 [2024-11-20 09:49:58.445436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:35.216 09:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.216 Malloc0 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.216 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.217 [2024-11-20 09:49:58.489697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.217 09:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:35.217 { 00:17:35.217 "params": { 00:17:35.217 "name": "Nvme$subsystem", 00:17:35.217 "trtype": "$TEST_TRANSPORT", 00:17:35.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:35.217 "adrfam": "ipv4", 00:17:35.217 "trsvcid": "$NVMF_PORT", 00:17:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:35.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:35.217 "hdgst": ${hdgst:-false}, 00:17:35.217 "ddgst": ${ddgst:-false} 00:17:35.217 }, 00:17:35.217 "method": "bdev_nvme_attach_controller" 00:17:35.217 } 00:17:35.217 EOF 00:17:35.217 )") 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:35.217 09:49:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:35.217 "params": { 00:17:35.217 "name": "Nvme1", 00:17:35.217 "trtype": "tcp", 00:17:35.217 "traddr": "10.0.0.2", 00:17:35.217 "adrfam": "ipv4", 00:17:35.217 "trsvcid": "4420", 00:17:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:35.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:35.217 "hdgst": false, 00:17:35.217 "ddgst": false 00:17:35.217 }, 00:17:35.217 "method": "bdev_nvme_attach_controller" 00:17:35.217 }' 00:17:35.217 [2024-11-20 09:49:58.541832] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:17:35.217 [2024-11-20 09:49:58.541877] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2921749 ] 00:17:35.476 [2024-11-20 09:49:58.622027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.476 [2024-11-20 09:49:58.671121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.476 [2024-11-20 09:49:58.671231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.476 [2024-11-20 09:49:58.671231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.735 I/O targets: 00:17:35.735 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:35.735 00:17:35.735 00:17:35.735 CUnit - A unit testing framework for C - Version 2.1-3 00:17:35.735 http://cunit.sourceforge.net/ 00:17:35.735 00:17:35.735 00:17:35.735 Suite: bdevio tests on: Nvme1n1 00:17:35.735 Test: blockdev write read block ...passed 00:17:35.994 Test: blockdev write zeroes read block ...passed 00:17:35.994 Test: blockdev write zeroes read no split ...passed 00:17:35.994 Test: blockdev write zeroes 
read split ...passed 00:17:35.994 Test: blockdev write zeroes read split partial ...passed 00:17:35.994 Test: blockdev reset ...[2024-11-20 09:49:59.163699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:35.994 [2024-11-20 09:49:59.163765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26cd920 (9): Bad file descriptor 00:17:35.994 [2024-11-20 09:49:59.258329] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:35.994 passed 00:17:35.994 Test: blockdev write read 8 blocks ...passed 00:17:35.994 Test: blockdev write read size > 128k ...passed 00:17:35.994 Test: blockdev write read invalid size ...passed 00:17:35.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:35.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:35.994 Test: blockdev write read max offset ...passed 00:17:36.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:36.254 Test: blockdev writev readv 8 blocks ...passed 00:17:36.254 Test: blockdev writev readv 30 x 1block ...passed 00:17:36.254 Test: blockdev writev readv block ...passed 00:17:36.254 Test: blockdev writev readv size > 128k ...passed 00:17:36.254 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:36.254 Test: blockdev comparev and writev ...[2024-11-20 09:49:59.427679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.254 [2024-11-20 09:49:59.427710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.427725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.254 [2024-11-20 
09:49:59.427732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.427961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.254 [2024-11-20 09:49:59.427979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.427991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.254 [2024-11-20 09:49:59.427999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.428235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.254 [2024-11-20 09:49:59.428245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.428257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.254 [2024-11-20 09:49:59.428264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.428507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.254 [2024-11-20 09:49:59.428517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.428528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:36.254 [2024-11-20 09:49:59.428535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:36.254 passed 00:17:36.254 Test: blockdev nvme passthru rw ...passed 00:17:36.254 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:49:59.510222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.254 [2024-11-20 09:49:59.510240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.510345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.254 [2024-11-20 09:49:59.510355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.510456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.254 [2024-11-20 09:49:59.510465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:36.254 [2024-11-20 09:49:59.510565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:36.254 [2024-11-20 09:49:59.510574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:36.254 passed 00:17:36.254 Test: blockdev nvme admin passthru ...passed 00:17:36.254 Test: blockdev copy ...passed 00:17:36.254 00:17:36.254 Run Summary: Type Total Ran Passed Failed Inactive 00:17:36.254 suites 1 1 n/a 0 0 00:17:36.254 tests 23 23 23 0 0 00:17:36.254 asserts 152 152 152 0 n/a 00:17:36.254 00:17:36.254 Elapsed time = 1.143 seconds 
00:17:36.513 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:36.513 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.513 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.772 rmmod nvme_tcp 00:17:36.772 rmmod nvme_fabrics 00:17:36.772 rmmod nvme_keyring 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2921503 ']' 00:17:36.772 09:49:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2921503 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2921503 ']' 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2921503 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2921503 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2921503' 00:17:36.772 killing process with pid 2921503 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2921503 00:17:36.772 09:49:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2921503 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:37.031 09:50:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.031 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.032 09:50:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:39.570 00:17:39.570 real 0m11.016s 00:17:39.570 user 0m14.451s 00:17:39.570 sys 0m5.388s 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:39.570 ************************************ 00:17:39.570 END TEST nvmf_bdevio_no_huge 00:17:39.570 ************************************ 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.570 
************************************ 00:17:39.570 START TEST nvmf_tls 00:17:39.570 ************************************ 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:39.570 * Looking for test storage... 00:17:39.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1703 -- # lcov --version 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:17:39.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.570 --rc genhtml_branch_coverage=1 00:17:39.570 --rc genhtml_function_coverage=1 00:17:39.570 --rc genhtml_legend=1 00:17:39.570 --rc geninfo_all_blocks=1 00:17:39.570 --rc geninfo_unexecuted_blocks=1 00:17:39.570 00:17:39.570 ' 00:17:39.570 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:17:39.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.570 --rc genhtml_branch_coverage=1 00:17:39.570 --rc genhtml_function_coverage=1 00:17:39.570 --rc genhtml_legend=1 00:17:39.570 --rc geninfo_all_blocks=1 00:17:39.571 --rc geninfo_unexecuted_blocks=1 00:17:39.571 00:17:39.571 ' 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:17:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.571 --rc genhtml_branch_coverage=1 00:17:39.571 --rc genhtml_function_coverage=1 00:17:39.571 --rc genhtml_legend=1 00:17:39.571 --rc geninfo_all_blocks=1 00:17:39.571 --rc geninfo_unexecuted_blocks=1 00:17:39.571 00:17:39.571 ' 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:17:39.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.571 --rc genhtml_branch_coverage=1 00:17:39.571 --rc genhtml_function_coverage=1 00:17:39.571 --rc genhtml_legend=1 00:17:39.571 --rc geninfo_all_blocks=1 00:17:39.571 --rc geninfo_unexecuted_blocks=1 00:17:39.571 00:17:39.571 ' 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.571 
09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:39.571 09:50:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.143 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.144 09:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:46.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:46.144 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.144 09:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:46.144 Found net devices under 0000:86:00.0: cvl_0_0 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:46.144 Found net devices under 0000:86:00.1: cvl_0_1 00:17:46.144 09:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.144 
09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:46.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:17:46.144 00:17:46.144 --- 10.0.0.2 ping statistics --- 00:17:46.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.144 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:17:46.144 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:17:46.144 00:17:46.144 --- 10.0.0.1 ping statistics --- 00:17:46.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.145 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2925509 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2925509 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2925509 ']' 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.145 [2024-11-20 09:50:08.631243] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:17:46.145 [2024-11-20 09:50:08.631287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.145 [2024-11-20 09:50:08.711531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.145 [2024-11-20 09:50:08.753011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.145 [2024-11-20 09:50:08.753047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:46.145 [2024-11-20 09:50:08.753055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.145 [2024-11-20 09:50:08.753062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.145 [2024-11-20 09:50:08.753067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.145 [2024-11-20 09:50:08.753632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:46.145 09:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:46.145 true 00:17:46.145 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.145 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:46.145 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:46.145 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:46.145 
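The interface bring-up traced earlier in this run (nvmf/common.sh@250 through @291) moves the target-side port cvl_0_0 into a private network namespace and leaves the initiator port cvl_0_1 in the root namespace, so one host can run NVMe/TCP against itself over real NICs. A minimal sketch of that command sequence as data; the function name and defaults are this sketch's own, nothing is executed here, and on a real machine these commands need root:

```python
# Sketch: the nvmf_tcp_init sequence from the trace, rebuilt as a command list
# so the topology is explicit. Mirrors nvmf/common.sh@267-@291; not SPDK code.

def tcp_init_cmds(target_if="cvl_0_0", initiator_if="cvl_0_1",
                  ns="cvl_0_0_ns_spdk",
                  target_ip="10.0.0.2", initiator_ip="10.0.0.1", port=4420):
    in_ns = ["ip", "netns", "exec", ns]   # prefix for target-namespace commands
    return [
        ["ip", "-4", "addr", "flush", target_if],
        ["ip", "-4", "addr", "flush", initiator_if],
        ["ip", "netns", "add", ns],
        # Move the target-side port into its own namespace; the initiator port
        # stays in the root namespace.
        ["ip", "link", "set", target_if, "netns", ns],
        ["ip", "addr", "add", f"{initiator_ip}/24", "dev", initiator_if],
        in_ns + ["ip", "addr", "add", f"{target_ip}/24", "dev", target_if],
        ["ip", "link", "set", initiator_if, "up"],
        in_ns + ["ip", "link", "set", target_if, "up"],
        in_ns + ["ip", "link", "set", "lo", "up"],
        # Open the NVMe/TCP port on the initiator-facing interface.
        ["iptables", "-I", "INPUT", "1", "-i", initiator_if,
         "-p", "tcp", "--dport", str(port), "-j", "ACCEPT"],
        # Both directions are then verified with a single ping each way.
        ["ping", "-c", "1", target_ip],
        in_ns + ["ping", "-c", "1", initiator_ip],
    ]
```

Every later target-side step in the log (nvmf_tgt launch, the `ip netns exec cvl_0_0_ns_spdk` prefix on spdk_nvme_perf) reuses the same namespace name.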
09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:46.145 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.145 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:46.404 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:46.404 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:46.404 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:46.663 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.663 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:46.663 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:46.664 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:46.664 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.664 09:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:46.923 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:46.923 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:46.923 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
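The tls_version and enable_ktls checks around target/tls.sh@74 through @114 all follow one pattern: push an option with sock_impl_set_options, read sock_impl_get_options back, and compare a single field, which the script extracts with `jq -r`. A sketch of the same round-trip check in Python; the JSON reply and helper name below are stand-ins for illustration, not output captured from this run:

```python
import json

# Sketch of the set-then-verify pattern (target/tls.sh@81-@83): after setting
# an option over RPC, parse the get-options reply and compare one field, the
# equivalent of `jq -r .tls_version` followed by [[ $version != 13 ]].

def check_option(get_options_json: str, field: str, expected):
    opts = json.loads(get_options_json)
    actual = opts[field]
    if actual != expected:
        raise RuntimeError(f"{field}: expected {expected!r}, got {actual!r}")
    return actual

# Stand-in for a sock_impl_get_options reply after `--tls-version 13`.
reply = '{"impl_name": "ssl", "tls_version": 13, "enable_ktls": false}'
check_option(reply, "tls_version", 13)  # passes; a mismatch would raise
```

The log exercises both outcomes: versions 13 and 7 round-trip, and enable_ktls is toggled true and back to false the same way.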
00:17:47.182 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.182 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:47.441 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:47.441 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:47.441 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:47.441 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.441 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:47.700 09:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:47.700 09:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:47.700 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:47.700 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:47.700 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.aQDwxO8lyD 00:17:47.700 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:47.700 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.KwVVRpH0ue 00:17:47.700 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:47.700 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:47.700 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aQDwxO8lyD 00:17:47.959 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.KwVVRpH0ue 00:17:47.959 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:47.959 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:48.219 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.aQDwxO8lyD 00:17:48.219 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.aQDwxO8lyD 00:17:48.219 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:48.479 [2024-11-20 09:50:11.668333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.479 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:48.738 09:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:48.738 [2024-11-20 09:50:12.033282] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.738 [2024-11-20 09:50:12.033471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.738 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:48.997 malloc0 00:17:48.997 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:49.256 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.aQDwxO8lyD 00:17:49.515 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:49.515 09:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aQDwxO8lyD 00:18:01.737 Initializing NVMe Controllers 00:18:01.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.738 Initialization complete. Launching workers. 
00:18:01.738 ========================================================
00:18:01.738 Latency(us)
00:18:01.738 Device Information : IOPS MiB/s Average min max
00:18:01.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16387.39 64.01 3905.55 876.02 6169.17
00:18:01.738 ========================================================
00:18:01.738 Total : 16387.39 64.01 3905.55 876.02 6169.17
00:18:01.738
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQDwxO8lyD
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aQDwxO8lyD
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2927858
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2927858 /var/tmp/bdevperf.sock
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2927858 ']'
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
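The MiB/s column in the spdk_nvme_perf summary above is derived from the IOPS figure and the 4096-byte I/O size configured on the command line: MiB/s = IOPS * io_size / 2^20. A quick cross-check of the reported totals; the helper name is this sketch's own:

```python
# Cross-check the throughput arithmetic in the perf summaries: MiB/s is
# IOPS times the I/O size (4096 B in both runs) divided by 2**20.

def mibps(iops: float, io_size: int = 4096) -> float:
    return iops * io_size / (1 << 20)

# 16387.39 IOPS (spdk_nvme_perf total) and 5401.09 IOPS (bdevperf TLSTESTn1)
# should reproduce the logged 64.01 and 21.10 MiB/s to two decimals.
print(round(mibps(16387.39), 2))
print(round(mibps(5401.09), 2))
```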
00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.738 09:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.738 [2024-11-20 09:50:22.958980] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:01.738 [2024-11-20 09:50:22.959029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927858 ] 00:18:01.738 [2024-11-20 09:50:23.032650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.738 [2024-11-20 09:50:23.072917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.738 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.738 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.738 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aQDwxO8lyD 00:18:01.738 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:01.738 [2024-11-20 09:50:23.544538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.738 TLSTESTn1 00:18:01.738 09:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:01.738 Running I/O for 10 seconds... 00:18:02.674 5229.00 IOPS, 20.43 MiB/s [2024-11-20T08:50:26.943Z] 5364.00 IOPS, 20.95 MiB/s [2024-11-20T08:50:27.880Z] 5440.33 IOPS, 21.25 MiB/s [2024-11-20T08:50:28.818Z] 5432.25 IOPS, 21.22 MiB/s [2024-11-20T08:50:29.770Z] 5430.60 IOPS, 21.21 MiB/s [2024-11-20T08:50:31.230Z] 5421.17 IOPS, 21.18 MiB/s [2024-11-20T08:50:31.911Z] 5402.14 IOPS, 21.10 MiB/s [2024-11-20T08:50:32.847Z] 5415.25 IOPS, 21.15 MiB/s [2024-11-20T08:50:33.783Z] 5419.11 IOPS, 21.17 MiB/s [2024-11-20T08:50:33.783Z] 5397.60 IOPS, 21.08 MiB/s 00:18:10.451 Latency(us) 00:18:10.451 [2024-11-20T08:50:33.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.451 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.451 Verification LBA range: start 0x0 length 0x2000 00:18:10.451 TLSTESTn1 : 10.02 5401.09 21.10 0.00 0.00 23662.91 4986.43 30773.43 00:18:10.451 [2024-11-20T08:50:33.783Z] =================================================================================================================== 00:18:10.451 [2024-11-20T08:50:33.783Z] Total : 5401.09 21.10 0.00 0.00 23662.91 4986.43 30773.43 00:18:10.451 { 00:18:10.451 "results": [ 00:18:10.451 { 00:18:10.451 "job": "TLSTESTn1", 00:18:10.451 "core_mask": "0x4", 00:18:10.451 "workload": "verify", 00:18:10.451 "status": "finished", 00:18:10.451 "verify_range": { 00:18:10.451 "start": 0, 00:18:10.451 "length": 8192 00:18:10.451 }, 00:18:10.451 "queue_depth": 128, 00:18:10.451 "io_size": 4096, 00:18:10.451 "runtime": 10.01723, 00:18:10.451 "iops": 
5401.093915184138,
00:18:10.451 "mibps": 21.098023106188037,
00:18:10.451 "io_failed": 0,
00:18:10.451 "io_timeout": 0,
00:18:10.451 "avg_latency_us": 23662.907911461985,
00:18:10.451 "min_latency_us": 4986.434782608696,
00:18:10.451 "max_latency_us": 30773.426086956522
00:18:10.451 }
00:18:10.451 ],
00:18:10.451 "core_count": 1
00:18:10.451 }
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2927858
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2927858 ']'
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2927858
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927858
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2927858'
00:18:10.710 killing process with pid 2927858
00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2927858
00:18:10.710 Received shutdown signal, test time was about 10.000000 seconds
00:18:10.710
00:18:10.710 Latency(us)
00:18:10.710 [2024-11-20T08:50:34.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:10.710 [2024-11-20T08:50:34.042Z]
=================================================================================================================== 00:18:10.710 [2024-11-20T08:50:34.042Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2927858 00:18:10.710 09:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KwVVRpH0ue 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KwVVRpH0ue 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KwVVRpH0ue 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KwVVRpH0ue 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2929698 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2929698 /var/tmp/bdevperf.sock 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2929698 ']' 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.710 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.969 [2024-11-20 09:50:34.054454] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:18:10.969 [2024-11-20 09:50:34.054501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929698 ] 00:18:10.969 [2024-11-20 09:50:34.123382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.969 [2024-11-20 09:50:34.163334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.969 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.969 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.969 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KwVVRpH0ue 00:18:11.228 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:11.487 [2024-11-20 09:50:34.638529] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.487 [2024-11-20 09:50:34.650347] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:11.487 [2024-11-20 09:50:34.650928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd06170 (107): Transport endpoint is not connected 00:18:11.487 [2024-11-20 09:50:34.651922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd06170 (9): Bad file descriptor 00:18:11.487 [2024-11-20 
09:50:34.652924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:11.487 [2024-11-20 09:50:34.652933] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:11.487 [2024-11-20 09:50:34.652940] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:11.487 [2024-11-20 09:50:34.652954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:11.487 request: 00:18:11.487 { 00:18:11.487 "name": "TLSTEST", 00:18:11.487 "trtype": "tcp", 00:18:11.487 "traddr": "10.0.0.2", 00:18:11.487 "adrfam": "ipv4", 00:18:11.487 "trsvcid": "4420", 00:18:11.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.487 "prchk_reftag": false, 00:18:11.487 "prchk_guard": false, 00:18:11.487 "hdgst": false, 00:18:11.487 "ddgst": false, 00:18:11.487 "psk": "key0", 00:18:11.487 "allow_unrecognized_csi": false, 00:18:11.487 "method": "bdev_nvme_attach_controller", 00:18:11.487 "req_id": 1 00:18:11.487 } 00:18:11.487 Got JSON-RPC error response 00:18:11.487 response: 00:18:11.487 { 00:18:11.487 "code": -5, 00:18:11.487 "message": "Input/output error" 00:18:11.487 } 00:18:11.487 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2929698 00:18:11.487 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2929698 ']' 00:18:11.487 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2929698 00:18:11.487 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.487 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.487 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2929698 00:18:11.487 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.487 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.488 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2929698' 00:18:11.488 killing process with pid 2929698 00:18:11.488 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2929698 00:18:11.488 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.488 00:18:11.488 Latency(us) 00:18:11.488 [2024-11-20T08:50:34.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.488 [2024-11-20T08:50:34.820Z] =================================================================================================================== 00:18:11.488 [2024-11-20T08:50:34.820Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.488 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2929698 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aQDwxO8lyD 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aQDwxO8lyD 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aQDwxO8lyD 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aQDwxO8lyD 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2929719 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2929719 
/var/tmp/bdevperf.sock 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2929719 ']' 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.748 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.749 09:50:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.749 [2024-11-20 09:50:34.926360] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
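The two RPC calls that each of these negative tests drives against the bdevperf socket can be sketched as follows. This is a minimal reconstruction of the commands visible in the trace above, not a test script from the repository; the socket path, key name, PSK temp file, and NQNs are the ones the log itself uses.

```shell
# Sketch of the TLS test flow seen in this log, assuming an SPDK checkout
# with a bdevperf instance already listening on /var/tmp/bdevperf.sock.
RPC=./scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
PSK_FILE=/tmp/tmp.aQDwxO8lyD   # temp PSK file name taken from the log

# 1) Register the PSK file under the name "key0" with the file-based keyring.
ADD_KEY="$RPC -s $SOCK keyring_file_add_key key0 $PSK_FILE"

# 2) Attach a TLS-enabled NVMe/TCP controller that references that key; the
#    mismatched hostnqn is what makes the PSK identity lookup fail on the target.
ATTACH="$RPC -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  -q nqn.2016-06.io.spdk:host2 --psk key0"

echo "$ADD_KEY"
echo "$ATTACH"
```

When the target cannot resolve the PSK for the presented identity (e.g. `NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1` above), the connection is torn down and the attach RPC surfaces as the `-5` "Input/output error" JSON-RPC response recorded in the log.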
00:18:11.749 [2024-11-20 09:50:34.926411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929719 ] 00:18:11.749 [2024-11-20 09:50:35.001832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.749 [2024-11-20 09:50:35.040044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.008 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.008 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.008 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aQDwxO8lyD 00:18:12.267 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:12.267 [2024-11-20 09:50:35.523148] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.267 [2024-11-20 09:50:35.532463] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:12.267 [2024-11-20 09:50:35.532483] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:12.267 [2024-11-20 09:50:35.532514] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:12.267 [2024-11-20 09:50:35.532561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5d170 (107): Transport endpoint is not connected 00:18:12.267 [2024-11-20 09:50:35.533554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5d170 (9): Bad file descriptor 00:18:12.267 [2024-11-20 09:50:35.534556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:12.267 [2024-11-20 09:50:35.534565] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:12.267 [2024-11-20 09:50:35.534572] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:12.267 [2024-11-20 09:50:35.534582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:12.267 request: 00:18:12.267 { 00:18:12.267 "name": "TLSTEST", 00:18:12.267 "trtype": "tcp", 00:18:12.267 "traddr": "10.0.0.2", 00:18:12.267 "adrfam": "ipv4", 00:18:12.267 "trsvcid": "4420", 00:18:12.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.267 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:12.267 "prchk_reftag": false, 00:18:12.267 "prchk_guard": false, 00:18:12.267 "hdgst": false, 00:18:12.267 "ddgst": false, 00:18:12.267 "psk": "key0", 00:18:12.267 "allow_unrecognized_csi": false, 00:18:12.267 "method": "bdev_nvme_attach_controller", 00:18:12.267 "req_id": 1 00:18:12.267 } 00:18:12.267 Got JSON-RPC error response 00:18:12.267 response: 00:18:12.267 { 00:18:12.267 "code": -5, 00:18:12.267 "message": "Input/output error" 00:18:12.267 } 00:18:12.267 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2929719 00:18:12.267 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2929719 ']' 00:18:12.267 09:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2929719 00:18:12.267 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:12.267 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.267 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2929719 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2929719' 00:18:12.525 killing process with pid 2929719 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2929719 00:18:12.525 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.525 00:18:12.525 Latency(us) 00:18:12.525 [2024-11-20T08:50:35.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.525 [2024-11-20T08:50:35.857Z] =================================================================================================================== 00:18:12.525 [2024-11-20T08:50:35.857Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2929719 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.525 09:50:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.525 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQDwxO8lyD 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQDwxO8lyD 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aQDwxO8lyD 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.aQDwxO8lyD 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2929952 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2929952 /var/tmp/bdevperf.sock 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2929952 ']' 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.526 09:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.526 [2024-11-20 09:50:35.814473] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:18:12.526 [2024-11-20 09:50:35.814525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929952 ] 00:18:12.785 [2024-11-20 09:50:35.890092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.785 [2024-11-20 09:50:35.929932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.785 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.785 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.785 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aQDwxO8lyD 00:18:13.043 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.303 [2024-11-20 09:50:36.385335] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.303 [2024-11-20 09:50:36.394699] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:13.303 [2024-11-20 09:50:36.394724] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:13.303 [2024-11-20 09:50:36.394747] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:13.303 [2024-11-20 09:50:36.395739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1657170 (107): Transport endpoint is not connected 00:18:13.303 [2024-11-20 09:50:36.396732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1657170 (9): Bad file descriptor 00:18:13.303 [2024-11-20 09:50:36.397734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:13.303 [2024-11-20 09:50:36.397744] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:13.303 [2024-11-20 09:50:36.397752] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:13.303 [2024-11-20 09:50:36.397762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:13.303 request: 00:18:13.303 { 00:18:13.303 "name": "TLSTEST", 00:18:13.303 "trtype": "tcp", 00:18:13.303 "traddr": "10.0.0.2", 00:18:13.303 "adrfam": "ipv4", 00:18:13.303 "trsvcid": "4420", 00:18:13.303 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:13.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.303 "prchk_reftag": false, 00:18:13.303 "prchk_guard": false, 00:18:13.303 "hdgst": false, 00:18:13.303 "ddgst": false, 00:18:13.303 "psk": "key0", 00:18:13.303 "allow_unrecognized_csi": false, 00:18:13.303 "method": "bdev_nvme_attach_controller", 00:18:13.303 "req_id": 1 00:18:13.303 } 00:18:13.303 Got JSON-RPC error response 00:18:13.303 response: 00:18:13.303 { 00:18:13.303 "code": -5, 00:18:13.303 "message": "Input/output error" 00:18:13.303 } 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2929952 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2929952 ']' 00:18:13.303 09:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2929952 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2929952 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2929952' 00:18:13.303 killing process with pid 2929952 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2929952 00:18:13.303 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.303 00:18:13.303 Latency(us) 00:18:13.303 [2024-11-20T08:50:36.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.303 [2024-11-20T08:50:36.635Z] =================================================================================================================== 00:18:13.303 [2024-11-20T08:50:36.635Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2929952 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.303 09:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.303 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2930094 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.304 09:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2930094 /var/tmp/bdevperf.sock 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2930094 ']' 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.304 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.563 [2024-11-20 09:50:36.676963] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:18:13.563 [2024-11-20 09:50:36.677015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930094 ] 00:18:13.564 [2024-11-20 09:50:36.751292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.564 [2024-11-20 09:50:36.790465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.564 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.564 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.564 09:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:13.822 [2024-11-20 09:50:37.060471] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:13.822 [2024-11-20 09:50:37.060506] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:13.822 request: 00:18:13.822 { 00:18:13.822 "name": "key0", 00:18:13.822 "path": "", 00:18:13.822 "method": "keyring_file_add_key", 00:18:13.822 "req_id": 1 00:18:13.822 } 00:18:13.822 Got JSON-RPC error response 00:18:13.822 response: 00:18:13.822 { 00:18:13.822 "code": -1, 00:18:13.822 "message": "Operation not permitted" 00:18:13.822 } 00:18:13.822 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.081 [2024-11-20 09:50:37.257077] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:14.081 [2024-11-20 09:50:37.257133] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:14.081 request: 00:18:14.081 { 00:18:14.081 "name": "TLSTEST", 00:18:14.081 "trtype": "tcp", 00:18:14.081 "traddr": "10.0.0.2", 00:18:14.081 "adrfam": "ipv4", 00:18:14.081 "trsvcid": "4420", 00:18:14.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.081 "prchk_reftag": false, 00:18:14.081 "prchk_guard": false, 00:18:14.081 "hdgst": false, 00:18:14.081 "ddgst": false, 00:18:14.081 "psk": "key0", 00:18:14.081 "allow_unrecognized_csi": false, 00:18:14.081 "method": "bdev_nvme_attach_controller", 00:18:14.081 "req_id": 1 00:18:14.081 } 00:18:14.081 Got JSON-RPC error response 00:18:14.081 response: 00:18:14.081 { 00:18:14.081 "code": -126, 00:18:14.081 "message": "Required key not available" 00:18:14.081 } 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2930094 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2930094 ']' 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2930094 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2930094 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2930094' 00:18:14.081 killing process with pid 2930094 
00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2930094 00:18:14.081 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.081 00:18:14.081 Latency(us) 00:18:14.081 [2024-11-20T08:50:37.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.081 [2024-11-20T08:50:37.413Z] =================================================================================================================== 00:18:14.081 [2024-11-20T08:50:37.413Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.081 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2930094 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2925509 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2925509 ']' 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2925509 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2925509 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2925509' 00:18:14.339 killing process with pid 2925509 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2925509 00:18:14.339 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2925509 00:18:14.598 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:14.598 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.QqDxSaR1yt 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:14.599 09:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.QqDxSaR1yt 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2930217 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2930217 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2930217 ']' 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.599 09:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.599 [2024-11-20 09:50:37.791108] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:18:14.599 [2024-11-20 09:50:37.791159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.599 [2024-11-20 09:50:37.872663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.599 [2024-11-20 09:50:37.913809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.599 [2024-11-20 09:50:37.913843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.599 [2024-11-20 09:50:37.913851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.599 [2024-11-20 09:50:37.913860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.599 [2024-11-20 09:50:37.913865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
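The `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` step above produced the long key `NVMeTLSkey-1:02:...wWXNJw==:`. A sketch of how that interchange string appears to be assembled, assuming (as in SPDK's `format_key` shell helper) that the ASCII hex key has a little-endian CRC32 appended before base64 encoding:

```python
import base64
import struct
import zlib

def format_interchange_psk(hexkey: str, hash_id: int) -> str:
    # Key material is the ASCII hex string itself, not its decoded bytes.
    raw = hexkey.encode()
    # Assumption: 4-byte CRC32 of the key, little-endian, appended before base64.
    crc = struct.pack("<I", zlib.crc32(raw))
    b64 = base64.b64encode(raw + crc).decode()
    return f"NVMeTLSkey-1:{hash_id:02}:{b64}:"

key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
print(key_long)
```

The base64 payload's first 64 characters are simply base64 of the 48-character hex string, which matches the `MDAxMTIyMzM0...NTU2Njc3` prefix in the log; the trailing 8 characters encode the CRC.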
00:18:14.599 [2024-11-20 09:50:37.914428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.858 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.858 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.858 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:14.858 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.858 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.858 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.858 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.QqDxSaR1yt 00:18:14.858 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QqDxSaR1yt 00:18:14.858 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:15.117 [2024-11-20 09:50:38.230479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.117 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:15.117 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:15.374 [2024-11-20 09:50:38.607454] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:15.374 [2024-11-20 09:50:38.607674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:15.374 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:15.632 malloc0 00:18:15.632 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:15.891 09:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QqDxSaR1yt 00:18:15.891 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.149 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QqDxSaR1yt 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QqDxSaR1yt 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2930544 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2930544 /var/tmp/bdevperf.sock 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2930544 ']' 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.150 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.150 [2024-11-20 09:50:39.403311] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:18:16.150 [2024-11-20 09:50:39.403361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930544 ] 00:18:16.150 [2024-11-20 09:50:39.480003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.409 [2024-11-20 09:50:39.522384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.409 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.409 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:16.409 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QqDxSaR1yt 00:18:16.667 09:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.667 [2024-11-20 09:50:39.986467] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.924 TLSTESTn1 00:18:16.924 09:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:16.924 Running I/O for 10 seconds... 
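The bdevperf summary that follows reports throughput as both IOPS and MiB/s. The two columns are related by the fixed I/O size passed on the command line (`-o 4096`); the conversion can be checked against the JSON result's `iops` and `mibps` fields:

```python
# Values taken from the bdevperf JSON result below.
iops = 5399.115480842591   # "iops"
io_size = 4096             # bytes per I/O (bdevperf -o 4096)

# MiB/s = IOPS * bytes-per-IO / 2^20
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # ~21.09, matching the reported "mibps"
```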
00:18:19.234 5216.00 IOPS, 20.38 MiB/s [2024-11-20T08:50:43.501Z] 5353.00 IOPS, 20.91 MiB/s [2024-11-20T08:50:44.437Z] 5347.00 IOPS, 20.89 MiB/s [2024-11-20T08:50:45.372Z] 5369.50 IOPS, 20.97 MiB/s [2024-11-20T08:50:46.308Z] 5383.80 IOPS, 21.03 MiB/s [2024-11-20T08:50:47.243Z] 5397.67 IOPS, 21.08 MiB/s [2024-11-20T08:50:48.618Z] 5388.57 IOPS, 21.05 MiB/s [2024-11-20T08:50:49.555Z] 5401.38 IOPS, 21.10 MiB/s [2024-11-20T08:50:50.492Z] 5399.00 IOPS, 21.09 MiB/s [2024-11-20T08:50:50.492Z] 5394.60 IOPS, 21.07 MiB/s 00:18:27.160 Latency(us) 00:18:27.160 [2024-11-20T08:50:50.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.160 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:27.160 Verification LBA range: start 0x0 length 0x2000 00:18:27.160 TLSTESTn1 : 10.02 5399.12 21.09 0.00 0.00 23672.08 5299.87 21655.37 00:18:27.160 [2024-11-20T08:50:50.492Z] =================================================================================================================== 00:18:27.160 [2024-11-20T08:50:50.492Z] Total : 5399.12 21.09 0.00 0.00 23672.08 5299.87 21655.37 00:18:27.160 { 00:18:27.160 "results": [ 00:18:27.160 { 00:18:27.160 "job": "TLSTESTn1", 00:18:27.160 "core_mask": "0x4", 00:18:27.160 "workload": "verify", 00:18:27.160 "status": "finished", 00:18:27.160 "verify_range": { 00:18:27.160 "start": 0, 00:18:27.160 "length": 8192 00:18:27.160 }, 00:18:27.160 "queue_depth": 128, 00:18:27.160 "io_size": 4096, 00:18:27.160 "runtime": 10.015159, 00:18:27.160 "iops": 5399.115480842591, 00:18:27.160 "mibps": 21.09029484704137, 00:18:27.160 "io_failed": 0, 00:18:27.160 "io_timeout": 0, 00:18:27.160 "avg_latency_us": 23672.08089315651, 00:18:27.160 "min_latency_us": 5299.8678260869565, 00:18:27.160 "max_latency_us": 21655.373913043477 00:18:27.160 } 00:18:27.160 ], 00:18:27.160 "core_count": 1 00:18:27.160 } 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2930544 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2930544 ']' 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2930544 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2930544 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2930544' 00:18:27.160 killing process with pid 2930544 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2930544 00:18:27.160 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.160 00:18:27.160 Latency(us) 00:18:27.160 [2024-11-20T08:50:50.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.160 [2024-11-20T08:50:50.492Z] =================================================================================================================== 00:18:27.160 [2024-11-20T08:50:50.492Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2930544 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.QqDxSaR1yt 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QqDxSaR1yt 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QqDxSaR1yt 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QqDxSaR1yt 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QqDxSaR1yt 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2932308 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.160 
09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2932308 /var/tmp/bdevperf.sock 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2932308 ']' 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.160 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.419 [2024-11-20 09:50:50.505377] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:18:27.419 [2024-11-20 09:50:50.505425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932308 ] 00:18:27.419 [2024-11-20 09:50:50.579956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.419 [2024-11-20 09:50:50.619872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.419 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.419 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.419 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QqDxSaR1yt 00:18:27.678 [2024-11-20 09:50:50.894785] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QqDxSaR1yt': 0100666 00:18:27.678 [2024-11-20 09:50:50.894814] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:27.678 request: 00:18:27.678 { 00:18:27.678 "name": "key0", 00:18:27.678 "path": "/tmp/tmp.QqDxSaR1yt", 00:18:27.678 "method": "keyring_file_add_key", 00:18:27.678 "req_id": 1 00:18:27.678 } 00:18:27.678 Got JSON-RPC error response 00:18:27.678 response: 00:18:27.678 { 00:18:27.678 "code": -1, 00:18:27.678 "message": "Operation not permitted" 00:18:27.678 } 00:18:27.678 09:50:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.937 [2024-11-20 09:50:51.083359] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.937 [2024-11-20 09:50:51.083391] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:27.937 request: 00:18:27.937 { 00:18:27.937 "name": "TLSTEST", 00:18:27.937 "trtype": "tcp", 00:18:27.937 "traddr": "10.0.0.2", 00:18:27.937 "adrfam": "ipv4", 00:18:27.937 "trsvcid": "4420", 00:18:27.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.937 "prchk_reftag": false, 00:18:27.937 "prchk_guard": false, 00:18:27.937 "hdgst": false, 00:18:27.937 "ddgst": false, 00:18:27.937 "psk": "key0", 00:18:27.937 "allow_unrecognized_csi": false, 00:18:27.937 "method": "bdev_nvme_attach_controller", 00:18:27.937 "req_id": 1 00:18:27.937 } 00:18:27.937 Got JSON-RPC error response 00:18:27.937 response: 00:18:27.937 { 00:18:27.937 "code": -126, 00:18:27.937 "message": "Required key not available" 00:18:27.937 } 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2932308 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2932308 ']' 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2932308 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2932308 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2932308' 00:18:27.937 killing process with pid 2932308 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2932308 00:18:27.937 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.937 00:18:27.937 Latency(us) 00:18:27.937 [2024-11-20T08:50:51.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.937 [2024-11-20T08:50:51.269Z] =================================================================================================================== 00:18:27.937 [2024-11-20T08:50:51.269Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.937 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2932308 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2930217 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2930217 ']' 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2930217 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2930217 00:18:28.197 
09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2930217' 00:18:28.197 killing process with pid 2930217 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2930217 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2930217 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2932548 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2932548 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2932548 ']' 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:28.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.197 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.456 [2024-11-20 09:50:51.570288] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:28.456 [2024-11-20 09:50:51.570338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.456 [2024-11-20 09:50:51.650755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.456 [2024-11-20 09:50:51.686950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.456 [2024-11-20 09:50:51.686986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.456 [2024-11-20 09:50:51.686993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.456 [2024-11-20 09:50:51.687001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.456 [2024-11-20 09:50:51.687007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.456 [2024-11-20 09:50:51.687587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.QqDxSaR1yt 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.QqDxSaR1yt 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.QqDxSaR1yt 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QqDxSaR1yt 00:18:28.715 09:50:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:28.715 [2024-11-20 09:50:52.006012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.715 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:28.974 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:29.232 [2024-11-20 09:50:52.374975] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:29.232 [2024-11-20 09:50:52.375194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.232 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:29.490 malloc0 00:18:29.491 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:29.491 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QqDxSaR1yt 00:18:29.749 [2024-11-20 09:50:52.948601] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QqDxSaR1yt': 0100666 00:18:29.749 [2024-11-20 09:50:52.948630] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:29.749 request: 00:18:29.749 { 00:18:29.749 "name": "key0", 00:18:29.749 "path": "/tmp/tmp.QqDxSaR1yt", 00:18:29.749 "method": "keyring_file_add_key", 00:18:29.749 "req_id": 1 
00:18:29.749 } 00:18:29.749 Got JSON-RPC error response 00:18:29.749 response: 00:18:29.749 { 00:18:29.749 "code": -1, 00:18:29.749 "message": "Operation not permitted" 00:18:29.749 } 00:18:29.749 09:50:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.008 [2024-11-20 09:50:53.141124] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:30.008 [2024-11-20 09:50:53.141162] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:30.008 request: 00:18:30.008 { 00:18:30.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.008 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.008 "psk": "key0", 00:18:30.008 "method": "nvmf_subsystem_add_host", 00:18:30.008 "req_id": 1 00:18:30.008 } 00:18:30.008 Got JSON-RPC error response 00:18:30.008 response: 00:18:30.008 { 00:18:30.008 "code": -32603, 00:18:30.008 "message": "Internal error" 00:18:30.008 } 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2932548 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2932548 ']' 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2932548 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.008 09:50:53 
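The `keyring_file_add_key` failure above ("Operation not permitted") is a permission check, not a TLS error: the log shows SPDK's keyring refusing a key file with mode `0100666`, and the run later repairs this with `chmod 0600`. A minimal, hedged reproduction of that mode requirement using only coreutils (the key material and temp path here are made up; only the `0666`-rejected/`0600`-accepted behavior is taken from the log):

```shell
# Sketch: SPDK's keyring_file_check_path rejects key files readable by
# group/other. The log shows mode 0100666 refused; 0600 is accepted.
KEY=$(mktemp)                       # hypothetical stand-in for /tmp/tmp.QqDxSaR1yt
echo "dummy-psk-material" > "$KEY"  # placeholder, not a real NVMe TLS PSK
chmod 0666 "$KEY"                   # too permissive: add_key would fail as above
chmod 0600 "$KEY"                   # owner read/write only: add_key succeeds
stat -c '%a' "$KEY"                 # prints 600
rm -f "$KEY"
```

Note how the follow-on `nvmf_subsystem_add_host --psk key0` then fails with "Key 'key0' does not exist" purely because the earlier add was rejected, which is the negative-path behavior this test case is asserting.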
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2932548 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2932548' 00:18:30.008 killing process with pid 2932548 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2932548 00:18:30.008 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2932548 00:18:30.266 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.QqDxSaR1yt 00:18:30.266 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:30.266 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.266 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.266 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.266 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2932818 00:18:30.266 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.266 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2932818 00:18:30.267 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2932818 ']' 00:18:30.267 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.267 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.267 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.267 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.267 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.267 [2024-11-20 09:50:53.433729] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:30.267 [2024-11-20 09:50:53.433774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.267 [2024-11-20 09:50:53.506632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.267 [2024-11-20 09:50:53.548527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.267 [2024-11-20 09:50:53.548565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.267 [2024-11-20 09:50:53.548572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.267 [2024-11-20 09:50:53.548578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.267 [2024-11-20 09:50:53.548584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.267 [2024-11-20 09:50:53.549173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.526 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.526 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:30.526 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:30.526 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:30.526 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.526 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.526 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.QqDxSaR1yt 00:18:30.526 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QqDxSaR1yt 00:18:30.526 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:30.784 [2024-11-20 09:50:53.856375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.784 09:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:30.784 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.043 [2024-11-20 09:50:54.241370] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.043 [2024-11-20 09:50:54.241575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:31.043 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:31.301 malloc0 00:18:31.301 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:31.560 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QqDxSaR1yt 00:18:31.560 09:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2933145 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2933145 /var/tmp/bdevperf.sock 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2933145 ']' 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
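The trace above (tls.sh@50 through tls.sh@59) is the full target-side TLS setup that the earlier run failed partway through. Collected into one sequence, with every command copied from the log (illustrative only — it requires the `nvmf_tgt` started at tls.sh@185 to be running, and the key path is this run's temp file):

```shell
# Target-side NVMe/TCP TLS setup, as traced in setup_nvmf_tgt (tls.sh@50-59).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.QqDxSaR1yt
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
chmod 0600 "$KEY"                   # keyring refuses group/world-readable keys
$RPC keyring_file_add_key key0 "$KEY"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The `-k` flag on `nvmf_subsystem_add_listener` enables the experimental TLS listener noted in the `nvmf_tcp_listen` messages; with the key now mode `0600`, both `keyring_file_add_key` and `nvmf_subsystem_add_host` complete without the errors seen in the previous attempt.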
domain socket /var/tmp/bdevperf.sock...' 00:18:31.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.819 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.819 [2024-11-20 09:50:55.048249] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:31.819 [2024-11-20 09:50:55.048296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2933145 ] 00:18:31.819 [2024-11-20 09:50:55.124090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.077 [2024-11-20 09:50:55.167415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.077 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.077 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.077 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QqDxSaR1yt 00:18:32.334 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.334 [2024-11-20 09:50:55.626321] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.592 TLSTESTn1 00:18:32.592 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:32.852 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:32.852 "subsystems": [ 00:18:32.852 { 00:18:32.852 "subsystem": "keyring", 00:18:32.852 "config": [ 00:18:32.852 { 00:18:32.852 "method": "keyring_file_add_key", 00:18:32.852 "params": { 00:18:32.852 "name": "key0", 00:18:32.852 "path": "/tmp/tmp.QqDxSaR1yt" 00:18:32.852 } 00:18:32.852 } 00:18:32.852 ] 00:18:32.852 }, 00:18:32.852 { 00:18:32.852 "subsystem": "iobuf", 00:18:32.852 "config": [ 00:18:32.852 { 00:18:32.852 "method": "iobuf_set_options", 00:18:32.852 "params": { 00:18:32.852 "small_pool_count": 8192, 00:18:32.852 "large_pool_count": 1024, 00:18:32.852 "small_bufsize": 8192, 00:18:32.852 "large_bufsize": 135168, 00:18:32.852 "enable_numa": false 00:18:32.852 } 00:18:32.852 } 00:18:32.852 ] 00:18:32.852 }, 00:18:32.852 { 00:18:32.852 "subsystem": "sock", 00:18:32.852 "config": [ 00:18:32.852 { 00:18:32.852 "method": "sock_set_default_impl", 00:18:32.852 "params": { 00:18:32.852 "impl_name": "posix" 00:18:32.852 } 00:18:32.852 }, 00:18:32.852 { 00:18:32.852 "method": "sock_impl_set_options", 00:18:32.852 "params": { 00:18:32.852 "impl_name": "ssl", 00:18:32.852 "recv_buf_size": 4096, 00:18:32.852 "send_buf_size": 4096, 00:18:32.852 "enable_recv_pipe": true, 00:18:32.852 "enable_quickack": false, 00:18:32.852 "enable_placement_id": 0, 00:18:32.852 "enable_zerocopy_send_server": true, 00:18:32.852 "enable_zerocopy_send_client": false, 00:18:32.852 "zerocopy_threshold": 0, 00:18:32.852 "tls_version": 0, 00:18:32.852 "enable_ktls": false 00:18:32.852 } 00:18:32.852 }, 00:18:32.852 { 00:18:32.852 "method": "sock_impl_set_options", 00:18:32.852 "params": { 00:18:32.852 "impl_name": "posix", 00:18:32.852 "recv_buf_size": 2097152, 00:18:32.852 "send_buf_size": 2097152, 00:18:32.852 "enable_recv_pipe": true, 00:18:32.852 "enable_quickack": false, 00:18:32.852 "enable_placement_id": 0, 
00:18:32.852 "enable_zerocopy_send_server": true, 00:18:32.852 "enable_zerocopy_send_client": false, 00:18:32.852 "zerocopy_threshold": 0, 00:18:32.852 "tls_version": 0, 00:18:32.852 "enable_ktls": false 00:18:32.852 } 00:18:32.852 } 00:18:32.852 ] 00:18:32.852 }, 00:18:32.852 { 00:18:32.852 "subsystem": "vmd", 00:18:32.852 "config": [] 00:18:32.852 }, 00:18:32.852 { 00:18:32.852 "subsystem": "accel", 00:18:32.852 "config": [ 00:18:32.852 { 00:18:32.852 "method": "accel_set_options", 00:18:32.852 "params": { 00:18:32.852 "small_cache_size": 128, 00:18:32.852 "large_cache_size": 16, 00:18:32.852 "task_count": 2048, 00:18:32.852 "sequence_count": 2048, 00:18:32.852 "buf_count": 2048 00:18:32.852 } 00:18:32.852 } 00:18:32.852 ] 00:18:32.852 }, 00:18:32.852 { 00:18:32.852 "subsystem": "bdev", 00:18:32.852 "config": [ 00:18:32.852 { 00:18:32.852 "method": "bdev_set_options", 00:18:32.852 "params": { 00:18:32.852 "bdev_io_pool_size": 65535, 00:18:32.852 "bdev_io_cache_size": 256, 00:18:32.852 "bdev_auto_examine": true, 00:18:32.853 "iobuf_small_cache_size": 128, 00:18:32.853 "iobuf_large_cache_size": 16 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "bdev_raid_set_options", 00:18:32.853 "params": { 00:18:32.853 "process_window_size_kb": 1024, 00:18:32.853 "process_max_bandwidth_mb_sec": 0 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "bdev_iscsi_set_options", 00:18:32.853 "params": { 00:18:32.853 "timeout_sec": 30 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "bdev_nvme_set_options", 00:18:32.853 "params": { 00:18:32.853 "action_on_timeout": "none", 00:18:32.853 "timeout_us": 0, 00:18:32.853 "timeout_admin_us": 0, 00:18:32.853 "keep_alive_timeout_ms": 10000, 00:18:32.853 "arbitration_burst": 0, 00:18:32.853 "low_priority_weight": 0, 00:18:32.853 "medium_priority_weight": 0, 00:18:32.853 "high_priority_weight": 0, 00:18:32.853 "nvme_adminq_poll_period_us": 10000, 00:18:32.853 "nvme_ioq_poll_period_us": 0, 
00:18:32.853 "io_queue_requests": 0, 00:18:32.853 "delay_cmd_submit": true, 00:18:32.853 "transport_retry_count": 4, 00:18:32.853 "bdev_retry_count": 3, 00:18:32.853 "transport_ack_timeout": 0, 00:18:32.853 "ctrlr_loss_timeout_sec": 0, 00:18:32.853 "reconnect_delay_sec": 0, 00:18:32.853 "fast_io_fail_timeout_sec": 0, 00:18:32.853 "disable_auto_failback": false, 00:18:32.853 "generate_uuids": false, 00:18:32.853 "transport_tos": 0, 00:18:32.853 "nvme_error_stat": false, 00:18:32.853 "rdma_srq_size": 0, 00:18:32.853 "io_path_stat": false, 00:18:32.853 "allow_accel_sequence": false, 00:18:32.853 "rdma_max_cq_size": 0, 00:18:32.853 "rdma_cm_event_timeout_ms": 0, 00:18:32.853 "dhchap_digests": [ 00:18:32.853 "sha256", 00:18:32.853 "sha384", 00:18:32.853 "sha512" 00:18:32.853 ], 00:18:32.853 "dhchap_dhgroups": [ 00:18:32.853 "null", 00:18:32.853 "ffdhe2048", 00:18:32.853 "ffdhe3072", 00:18:32.853 "ffdhe4096", 00:18:32.853 "ffdhe6144", 00:18:32.853 "ffdhe8192" 00:18:32.853 ] 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "bdev_nvme_set_hotplug", 00:18:32.853 "params": { 00:18:32.853 "period_us": 100000, 00:18:32.853 "enable": false 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "bdev_malloc_create", 00:18:32.853 "params": { 00:18:32.853 "name": "malloc0", 00:18:32.853 "num_blocks": 8192, 00:18:32.853 "block_size": 4096, 00:18:32.853 "physical_block_size": 4096, 00:18:32.853 "uuid": "92d58ec4-cb83-4b86-96a4-ef81c560ff92", 00:18:32.853 "optimal_io_boundary": 0, 00:18:32.853 "md_size": 0, 00:18:32.853 "dif_type": 0, 00:18:32.853 "dif_is_head_of_md": false, 00:18:32.853 "dif_pi_format": 0 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "bdev_wait_for_examine" 00:18:32.853 } 00:18:32.853 ] 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "subsystem": "nbd", 00:18:32.853 "config": [] 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "subsystem": "scheduler", 00:18:32.853 "config": [ 00:18:32.853 { 00:18:32.853 "method": 
"framework_set_scheduler", 00:18:32.853 "params": { 00:18:32.853 "name": "static" 00:18:32.853 } 00:18:32.853 } 00:18:32.853 ] 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "subsystem": "nvmf", 00:18:32.853 "config": [ 00:18:32.853 { 00:18:32.853 "method": "nvmf_set_config", 00:18:32.853 "params": { 00:18:32.853 "discovery_filter": "match_any", 00:18:32.853 "admin_cmd_passthru": { 00:18:32.853 "identify_ctrlr": false 00:18:32.853 }, 00:18:32.853 "dhchap_digests": [ 00:18:32.853 "sha256", 00:18:32.853 "sha384", 00:18:32.853 "sha512" 00:18:32.853 ], 00:18:32.853 "dhchap_dhgroups": [ 00:18:32.853 "null", 00:18:32.853 "ffdhe2048", 00:18:32.853 "ffdhe3072", 00:18:32.853 "ffdhe4096", 00:18:32.853 "ffdhe6144", 00:18:32.853 "ffdhe8192" 00:18:32.853 ] 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "nvmf_set_max_subsystems", 00:18:32.853 "params": { 00:18:32.853 "max_subsystems": 1024 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "nvmf_set_crdt", 00:18:32.853 "params": { 00:18:32.853 "crdt1": 0, 00:18:32.853 "crdt2": 0, 00:18:32.853 "crdt3": 0 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "nvmf_create_transport", 00:18:32.853 "params": { 00:18:32.853 "trtype": "TCP", 00:18:32.853 "max_queue_depth": 128, 00:18:32.853 "max_io_qpairs_per_ctrlr": 127, 00:18:32.853 "in_capsule_data_size": 4096, 00:18:32.853 "max_io_size": 131072, 00:18:32.853 "io_unit_size": 131072, 00:18:32.853 "max_aq_depth": 128, 00:18:32.853 "num_shared_buffers": 511, 00:18:32.853 "buf_cache_size": 4294967295, 00:18:32.853 "dif_insert_or_strip": false, 00:18:32.853 "zcopy": false, 00:18:32.853 "c2h_success": false, 00:18:32.853 "sock_priority": 0, 00:18:32.853 "abort_timeout_sec": 1, 00:18:32.853 "ack_timeout": 0, 00:18:32.853 "data_wr_pool_size": 0 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "nvmf_create_subsystem", 00:18:32.853 "params": { 00:18:32.853 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.853 
"allow_any_host": false, 00:18:32.853 "serial_number": "SPDK00000000000001", 00:18:32.853 "model_number": "SPDK bdev Controller", 00:18:32.853 "max_namespaces": 10, 00:18:32.853 "min_cntlid": 1, 00:18:32.853 "max_cntlid": 65519, 00:18:32.853 "ana_reporting": false 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "nvmf_subsystem_add_host", 00:18:32.853 "params": { 00:18:32.853 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.853 "host": "nqn.2016-06.io.spdk:host1", 00:18:32.853 "psk": "key0" 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "nvmf_subsystem_add_ns", 00:18:32.853 "params": { 00:18:32.853 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.853 "namespace": { 00:18:32.853 "nsid": 1, 00:18:32.853 "bdev_name": "malloc0", 00:18:32.853 "nguid": "92D58EC4CB834B8696A4EF81C560FF92", 00:18:32.853 "uuid": "92d58ec4-cb83-4b86-96a4-ef81c560ff92", 00:18:32.853 "no_auto_visible": false 00:18:32.853 } 00:18:32.853 } 00:18:32.853 }, 00:18:32.853 { 00:18:32.853 "method": "nvmf_subsystem_add_listener", 00:18:32.853 "params": { 00:18:32.853 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.853 "listen_address": { 00:18:32.853 "trtype": "TCP", 00:18:32.853 "adrfam": "IPv4", 00:18:32.853 "traddr": "10.0.0.2", 00:18:32.853 "trsvcid": "4420" 00:18:32.853 }, 00:18:32.853 "secure_channel": true 00:18:32.853 } 00:18:32.853 } 00:18:32.853 ] 00:18:32.853 } 00:18:32.853 ] 00:18:32.853 }' 00:18:32.853 09:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:33.113 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:33.113 "subsystems": [ 00:18:33.113 { 00:18:33.113 "subsystem": "keyring", 00:18:33.113 "config": [ 00:18:33.113 { 00:18:33.113 "method": "keyring_file_add_key", 00:18:33.113 "params": { 00:18:33.113 "name": "key0", 00:18:33.113 "path": "/tmp/tmp.QqDxSaR1yt" 00:18:33.113 } 
00:18:33.113 } 00:18:33.113 ] 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "subsystem": "iobuf", 00:18:33.113 "config": [ 00:18:33.113 { 00:18:33.113 "method": "iobuf_set_options", 00:18:33.113 "params": { 00:18:33.113 "small_pool_count": 8192, 00:18:33.113 "large_pool_count": 1024, 00:18:33.113 "small_bufsize": 8192, 00:18:33.113 "large_bufsize": 135168, 00:18:33.113 "enable_numa": false 00:18:33.113 } 00:18:33.113 } 00:18:33.113 ] 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "subsystem": "sock", 00:18:33.113 "config": [ 00:18:33.113 { 00:18:33.113 "method": "sock_set_default_impl", 00:18:33.113 "params": { 00:18:33.113 "impl_name": "posix" 00:18:33.113 } 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "method": "sock_impl_set_options", 00:18:33.113 "params": { 00:18:33.113 "impl_name": "ssl", 00:18:33.113 "recv_buf_size": 4096, 00:18:33.113 "send_buf_size": 4096, 00:18:33.113 "enable_recv_pipe": true, 00:18:33.113 "enable_quickack": false, 00:18:33.113 "enable_placement_id": 0, 00:18:33.113 "enable_zerocopy_send_server": true, 00:18:33.113 "enable_zerocopy_send_client": false, 00:18:33.113 "zerocopy_threshold": 0, 00:18:33.113 "tls_version": 0, 00:18:33.113 "enable_ktls": false 00:18:33.113 } 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "method": "sock_impl_set_options", 00:18:33.113 "params": { 00:18:33.113 "impl_name": "posix", 00:18:33.113 "recv_buf_size": 2097152, 00:18:33.113 "send_buf_size": 2097152, 00:18:33.113 "enable_recv_pipe": true, 00:18:33.113 "enable_quickack": false, 00:18:33.113 "enable_placement_id": 0, 00:18:33.113 "enable_zerocopy_send_server": true, 00:18:33.113 "enable_zerocopy_send_client": false, 00:18:33.113 "zerocopy_threshold": 0, 00:18:33.113 "tls_version": 0, 00:18:33.113 "enable_ktls": false 00:18:33.113 } 00:18:33.113 } 00:18:33.113 ] 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "subsystem": "vmd", 00:18:33.113 "config": [] 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "subsystem": "accel", 00:18:33.113 "config": [ 00:18:33.113 { 00:18:33.113 
"method": "accel_set_options", 00:18:33.113 "params": { 00:18:33.113 "small_cache_size": 128, 00:18:33.113 "large_cache_size": 16, 00:18:33.113 "task_count": 2048, 00:18:33.113 "sequence_count": 2048, 00:18:33.113 "buf_count": 2048 00:18:33.113 } 00:18:33.113 } 00:18:33.113 ] 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "subsystem": "bdev", 00:18:33.113 "config": [ 00:18:33.113 { 00:18:33.113 "method": "bdev_set_options", 00:18:33.113 "params": { 00:18:33.113 "bdev_io_pool_size": 65535, 00:18:33.113 "bdev_io_cache_size": 256, 00:18:33.113 "bdev_auto_examine": true, 00:18:33.113 "iobuf_small_cache_size": 128, 00:18:33.113 "iobuf_large_cache_size": 16 00:18:33.113 } 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "method": "bdev_raid_set_options", 00:18:33.113 "params": { 00:18:33.113 "process_window_size_kb": 1024, 00:18:33.113 "process_max_bandwidth_mb_sec": 0 00:18:33.113 } 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "method": "bdev_iscsi_set_options", 00:18:33.113 "params": { 00:18:33.113 "timeout_sec": 30 00:18:33.113 } 00:18:33.113 }, 00:18:33.113 { 00:18:33.113 "method": "bdev_nvme_set_options", 00:18:33.113 "params": { 00:18:33.113 "action_on_timeout": "none", 00:18:33.113 "timeout_us": 0, 00:18:33.113 "timeout_admin_us": 0, 00:18:33.113 "keep_alive_timeout_ms": 10000, 00:18:33.113 "arbitration_burst": 0, 00:18:33.113 "low_priority_weight": 0, 00:18:33.113 "medium_priority_weight": 0, 00:18:33.113 "high_priority_weight": 0, 00:18:33.113 "nvme_adminq_poll_period_us": 10000, 00:18:33.113 "nvme_ioq_poll_period_us": 0, 00:18:33.113 "io_queue_requests": 512, 00:18:33.113 "delay_cmd_submit": true, 00:18:33.113 "transport_retry_count": 4, 00:18:33.113 "bdev_retry_count": 3, 00:18:33.113 "transport_ack_timeout": 0, 00:18:33.113 "ctrlr_loss_timeout_sec": 0, 00:18:33.113 "reconnect_delay_sec": 0, 00:18:33.113 "fast_io_fail_timeout_sec": 0, 00:18:33.114 "disable_auto_failback": false, 00:18:33.114 "generate_uuids": false, 00:18:33.114 "transport_tos": 0, 00:18:33.114 
"nvme_error_stat": false, 00:18:33.114 "rdma_srq_size": 0, 00:18:33.114 "io_path_stat": false, 00:18:33.114 "allow_accel_sequence": false, 00:18:33.114 "rdma_max_cq_size": 0, 00:18:33.114 "rdma_cm_event_timeout_ms": 0, 00:18:33.114 "dhchap_digests": [ 00:18:33.114 "sha256", 00:18:33.114 "sha384", 00:18:33.114 "sha512" 00:18:33.114 ], 00:18:33.114 "dhchap_dhgroups": [ 00:18:33.114 "null", 00:18:33.114 "ffdhe2048", 00:18:33.114 "ffdhe3072", 00:18:33.114 "ffdhe4096", 00:18:33.114 "ffdhe6144", 00:18:33.114 "ffdhe8192" 00:18:33.114 ] 00:18:33.114 } 00:18:33.114 }, 00:18:33.114 { 00:18:33.114 "method": "bdev_nvme_attach_controller", 00:18:33.114 "params": { 00:18:33.114 "name": "TLSTEST", 00:18:33.114 "trtype": "TCP", 00:18:33.114 "adrfam": "IPv4", 00:18:33.114 "traddr": "10.0.0.2", 00:18:33.114 "trsvcid": "4420", 00:18:33.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.114 "prchk_reftag": false, 00:18:33.114 "prchk_guard": false, 00:18:33.114 "ctrlr_loss_timeout_sec": 0, 00:18:33.114 "reconnect_delay_sec": 0, 00:18:33.114 "fast_io_fail_timeout_sec": 0, 00:18:33.114 "psk": "key0", 00:18:33.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.114 "hdgst": false, 00:18:33.114 "ddgst": false, 00:18:33.114 "multipath": "multipath" 00:18:33.114 } 00:18:33.114 }, 00:18:33.114 { 00:18:33.114 "method": "bdev_nvme_set_hotplug", 00:18:33.114 "params": { 00:18:33.114 "period_us": 100000, 00:18:33.114 "enable": false 00:18:33.114 } 00:18:33.114 }, 00:18:33.114 { 00:18:33.114 "method": "bdev_wait_for_examine" 00:18:33.114 } 00:18:33.114 ] 00:18:33.114 }, 00:18:33.114 { 00:18:33.114 "subsystem": "nbd", 00:18:33.114 "config": [] 00:18:33.114 } 00:18:33.114 ] 00:18:33.114 }' 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2933145 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2933145 ']' 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2933145 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2933145 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2933145' 00:18:33.114 killing process with pid 2933145 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2933145 00:18:33.114 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.114 00:18:33.114 Latency(us) 00:18:33.114 [2024-11-20T08:50:56.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.114 [2024-11-20T08:50:56.446Z] =================================================================================================================== 00:18:33.114 [2024-11-20T08:50:56.446Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.114 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2933145 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2932818 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2932818 ']' 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2932818 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2932818 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2932818' 00:18:33.375 killing process with pid 2932818 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2932818 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2932818 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.375 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:33.375 "subsystems": [ 00:18:33.375 { 00:18:33.375 "subsystem": "keyring", 00:18:33.375 "config": [ 00:18:33.375 { 00:18:33.375 "method": "keyring_file_add_key", 00:18:33.375 "params": { 00:18:33.375 "name": "key0", 00:18:33.375 "path": "/tmp/tmp.QqDxSaR1yt" 00:18:33.375 } 00:18:33.375 } 00:18:33.375 ] 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "subsystem": "iobuf", 00:18:33.375 "config": [ 00:18:33.375 { 00:18:33.375 "method": "iobuf_set_options", 00:18:33.375 "params": { 00:18:33.375 "small_pool_count": 8192, 00:18:33.375 "large_pool_count": 1024, 00:18:33.375 "small_bufsize": 8192, 00:18:33.375 "large_bufsize": 135168, 
00:18:33.375 "enable_numa": false 00:18:33.375 } 00:18:33.375 } 00:18:33.375 ] 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "subsystem": "sock", 00:18:33.375 "config": [ 00:18:33.375 { 00:18:33.375 "method": "sock_set_default_impl", 00:18:33.375 "params": { 00:18:33.375 "impl_name": "posix" 00:18:33.375 } 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "method": "sock_impl_set_options", 00:18:33.375 "params": { 00:18:33.375 "impl_name": "ssl", 00:18:33.375 "recv_buf_size": 4096, 00:18:33.375 "send_buf_size": 4096, 00:18:33.375 "enable_recv_pipe": true, 00:18:33.375 "enable_quickack": false, 00:18:33.375 "enable_placement_id": 0, 00:18:33.375 "enable_zerocopy_send_server": true, 00:18:33.375 "enable_zerocopy_send_client": false, 00:18:33.375 "zerocopy_threshold": 0, 00:18:33.375 "tls_version": 0, 00:18:33.375 "enable_ktls": false 00:18:33.375 } 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "method": "sock_impl_set_options", 00:18:33.375 "params": { 00:18:33.375 "impl_name": "posix", 00:18:33.375 "recv_buf_size": 2097152, 00:18:33.375 "send_buf_size": 2097152, 00:18:33.375 "enable_recv_pipe": true, 00:18:33.375 "enable_quickack": false, 00:18:33.375 "enable_placement_id": 0, 00:18:33.375 "enable_zerocopy_send_server": true, 00:18:33.375 "enable_zerocopy_send_client": false, 00:18:33.375 "zerocopy_threshold": 0, 00:18:33.375 "tls_version": 0, 00:18:33.375 "enable_ktls": false 00:18:33.375 } 00:18:33.375 } 00:18:33.375 ] 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "subsystem": "vmd", 00:18:33.375 "config": [] 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "subsystem": "accel", 00:18:33.375 "config": [ 00:18:33.375 { 00:18:33.375 "method": "accel_set_options", 00:18:33.375 "params": { 00:18:33.375 "small_cache_size": 128, 00:18:33.375 "large_cache_size": 16, 00:18:33.375 "task_count": 2048, 00:18:33.375 "sequence_count": 2048, 00:18:33.375 "buf_count": 2048 00:18:33.375 } 00:18:33.375 } 00:18:33.375 ] 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "subsystem": "bdev", 00:18:33.375 
"config": [ 00:18:33.375 { 00:18:33.375 "method": "bdev_set_options", 00:18:33.375 "params": { 00:18:33.375 "bdev_io_pool_size": 65535, 00:18:33.375 "bdev_io_cache_size": 256, 00:18:33.375 "bdev_auto_examine": true, 00:18:33.375 "iobuf_small_cache_size": 128, 00:18:33.375 "iobuf_large_cache_size": 16 00:18:33.375 } 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "method": "bdev_raid_set_options", 00:18:33.375 "params": { 00:18:33.375 "process_window_size_kb": 1024, 00:18:33.375 "process_max_bandwidth_mb_sec": 0 00:18:33.375 } 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "method": "bdev_iscsi_set_options", 00:18:33.375 "params": { 00:18:33.375 "timeout_sec": 30 00:18:33.375 } 00:18:33.375 }, 00:18:33.375 { 00:18:33.375 "method": "bdev_nvme_set_options", 00:18:33.375 "params": { 00:18:33.375 "action_on_timeout": "none", 00:18:33.375 "timeout_us": 0, 00:18:33.375 "timeout_admin_us": 0, 00:18:33.375 "keep_alive_timeout_ms": 10000, 00:18:33.375 "arbitration_burst": 0, 00:18:33.375 "low_priority_weight": 0, 00:18:33.375 "medium_priority_weight": 0, 00:18:33.375 "high_priority_weight": 0, 00:18:33.375 "nvme_adminq_poll_period_us": 10000, 00:18:33.375 "nvme_ioq_poll_period_us": 0, 00:18:33.375 "io_queue_requests": 0, 00:18:33.375 "delay_cmd_submit": true, 00:18:33.375 "transport_retry_count": 4, 00:18:33.375 "bdev_retry_count": 3, 00:18:33.375 "transport_ack_timeout": 0, 00:18:33.375 "ctrlr_loss_timeout_sec": 0, 00:18:33.375 "reconnect_delay_sec": 0, 00:18:33.375 "fast_io_fail_timeout_sec": 0, 00:18:33.375 "disable_auto_failback": false, 00:18:33.375 "generate_uuids": false, 00:18:33.375 "transport_tos": 0, 00:18:33.375 "nvme_error_stat": false, 00:18:33.375 "rdma_srq_size": 0, 00:18:33.375 "io_path_stat": false, 00:18:33.375 "allow_accel_sequence": false, 00:18:33.375 "rdma_max_cq_size": 0, 00:18:33.375 "rdma_cm_event_timeout_ms": 0, 00:18:33.375 "dhchap_digests": [ 00:18:33.375 "sha256", 00:18:33.376 "sha384", 00:18:33.376 "sha512" 00:18:33.376 ], 00:18:33.376 
"dhchap_dhgroups": [ 00:18:33.376 "null", 00:18:33.376 "ffdhe2048", 00:18:33.376 "ffdhe3072", 00:18:33.376 "ffdhe4096", 00:18:33.376 "ffdhe6144", 00:18:33.376 "ffdhe8192" 00:18:33.376 ] 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "method": "bdev_nvme_set_hotplug", 00:18:33.376 "params": { 00:18:33.376 "period_us": 100000, 00:18:33.376 "enable": false 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "method": "bdev_malloc_create", 00:18:33.376 "params": { 00:18:33.376 "name": "malloc0", 00:18:33.376 "num_blocks": 8192, 00:18:33.376 "block_size": 4096, 00:18:33.376 "physical_block_size": 4096, 00:18:33.376 "uuid": "92d58ec4-cb83-4b86-96a4-ef81c560ff92", 00:18:33.376 "optimal_io_boundary": 0, 00:18:33.376 "md_size": 0, 00:18:33.376 "dif_type": 0, 00:18:33.376 "dif_is_head_of_md": false, 00:18:33.376 "dif_pi_format": 0 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "method": "bdev_wait_for_examine" 00:18:33.376 } 00:18:33.376 ] 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "subsystem": "nbd", 00:18:33.376 "config": [] 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "subsystem": "scheduler", 00:18:33.376 "config": [ 00:18:33.376 { 00:18:33.376 "method": "framework_set_scheduler", 00:18:33.376 "params": { 00:18:33.376 "name": "static" 00:18:33.376 } 00:18:33.376 } 00:18:33.376 ] 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "subsystem": "nvmf", 00:18:33.376 "config": [ 00:18:33.376 { 00:18:33.376 "method": "nvmf_set_config", 00:18:33.376 "params": { 00:18:33.376 "discovery_filter": "match_any", 00:18:33.376 "admin_cmd_passthru": { 00:18:33.376 "identify_ctrlr": false 00:18:33.376 }, 00:18:33.376 "dhchap_digests": [ 00:18:33.376 "sha256", 00:18:33.376 "sha384", 00:18:33.376 "sha512" 00:18:33.376 ], 00:18:33.376 "dhchap_dhgroups": [ 00:18:33.376 "null", 00:18:33.376 "ffdhe2048", 00:18:33.376 "ffdhe3072", 00:18:33.376 "ffdhe4096", 00:18:33.376 "ffdhe6144", 00:18:33.376 "ffdhe8192" 00:18:33.376 ] 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 
00:18:33.376 "method": "nvmf_set_max_subsystems", 00:18:33.376 "params": { 00:18:33.376 "max_subsystems": 1024 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "method": "nvmf_set_crdt", 00:18:33.376 "params": { 00:18:33.376 "crdt1": 0, 00:18:33.376 "crdt2": 0, 00:18:33.376 "crdt3": 0 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "method": "nvmf_create_transport", 00:18:33.376 "params": { 00:18:33.376 "trtype": "TCP", 00:18:33.376 "max_queue_depth": 128, 00:18:33.376 "max_io_qpairs_per_ctrlr": 127, 00:18:33.376 "in_capsule_data_size": 4096, 00:18:33.376 "max_io_size": 131072, 00:18:33.376 "io_unit_size": 131072, 00:18:33.376 "max_aq_depth": 128, 00:18:33.376 "num_shared_buffers": 511, 00:18:33.376 "buf_cache_size": 4294967295, 00:18:33.376 "dif_insert_or_strip": false, 00:18:33.376 "zcopy": false, 00:18:33.376 "c2h_success": false, 00:18:33.376 "sock_priority": 0, 00:18:33.376 "abort_timeout_sec": 1, 00:18:33.376 "ack_timeout": 0, 00:18:33.376 "data_wr_pool_size": 0 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "method": "nvmf_create_subsystem", 00:18:33.376 "params": { 00:18:33.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.376 "allow_any_host": false, 00:18:33.376 "serial_number": "SPDK00000000000001", 00:18:33.376 "model_number": "SPDK bdev Controller", 00:18:33.376 "max_namespaces": 10, 00:18:33.376 "min_cntlid": 1, 00:18:33.376 "max_cntlid": 65519, 00:18:33.376 "ana_reporting": false 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "method": "nvmf_subsystem_add_host", 00:18:33.376 "params": { 00:18:33.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.376 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.376 "psk": "key0" 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "method": "nvmf_subsystem_add_ns", 00:18:33.376 "params": { 00:18:33.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.376 "namespace": { 00:18:33.376 "nsid": 1, 00:18:33.376 "bdev_name": "malloc0", 00:18:33.376 "nguid": 
"92D58EC4CB834B8696A4EF81C560FF92", 00:18:33.376 "uuid": "92d58ec4-cb83-4b86-96a4-ef81c560ff92", 00:18:33.376 "no_auto_visible": false 00:18:33.376 } 00:18:33.376 } 00:18:33.376 }, 00:18:33.376 { 00:18:33.376 "method": "nvmf_subsystem_add_listener", 00:18:33.376 "params": { 00:18:33.376 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.376 "listen_address": { 00:18:33.376 "trtype": "TCP", 00:18:33.376 "adrfam": "IPv4", 00:18:33.376 "traddr": "10.0.0.2", 00:18:33.376 "trsvcid": "4420" 00:18:33.376 }, 00:18:33.376 "secure_channel": true 00:18:33.376 } 00:18:33.376 } 00:18:33.376 ] 00:18:33.376 } 00:18:33.376 ] 00:18:33.376 }' 00:18:33.376 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2933535 00:18:33.376 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:33.376 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2933535 00:18:33.376 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2933535 ']' 00:18:33.376 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.376 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.377 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:33.377 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.377 09:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.636 [2024-11-20 09:50:56.723175] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:33.636 [2024-11-20 09:50:56.723226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.636 [2024-11-20 09:50:56.803184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.636 [2024-11-20 09:50:56.841018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.636 [2024-11-20 09:50:56.841054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.636 [2024-11-20 09:50:56.841061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.636 [2024-11-20 09:50:56.841066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.636 [2024-11-20 09:50:56.841071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
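The two application starts in this log use SPDK-style hex core masks (`-m 0x2` for the nvmf target, `-m 0x4` for bdevperf), and the reactor notices confirm the decoded cores ("Reactor started on core 1" / "core 2"). A small illustrative decoder — a hypothetical helper, not part of the autotest scripts — shows the mapping:

```shell
# Decode an SPDK "-m" core mask into the CPU core IDs it selects.
# Illustrative only; SPDK does this internally when parsing -m.
decode_core_mask() {
    local mask=$(( $1 )) bit=0 cores=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            cores="$cores$bit "
        fi
        mask=$(( mask >> 1 ))
        bit=$(( bit + 1 ))
    done
    echo "${cores% }"
}

# "-m 0x2" pins the target to core 1; "-m 0x4" pins bdevperf to core 2,
# matching the "Reactor started on core N" notices in this log.
decode_core_mask 0x2   # prints: 1
decode_core_mask 0x4   # prints: 2
```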
00:18:33.636 [2024-11-20 09:50:56.841671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.893 [2024-11-20 09:50:57.053155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.893 [2024-11-20 09:50:57.085180] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.893 [2024-11-20 09:50:57.085376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2933570 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2933570 /var/tmp/bdevperf.sock 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2933570 ']' 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.460 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:34.460 "subsystems": [ 00:18:34.460 { 00:18:34.460 "subsystem": "keyring", 00:18:34.460 "config": [ 00:18:34.460 { 00:18:34.460 "method": "keyring_file_add_key", 00:18:34.460 "params": { 00:18:34.460 "name": "key0", 00:18:34.460 "path": "/tmp/tmp.QqDxSaR1yt" 00:18:34.460 } 00:18:34.460 } 00:18:34.460 ] 00:18:34.460 }, 00:18:34.460 { 00:18:34.460 "subsystem": "iobuf", 00:18:34.460 "config": [ 00:18:34.460 { 00:18:34.460 "method": "iobuf_set_options", 00:18:34.460 "params": { 00:18:34.460 "small_pool_count": 8192, 00:18:34.460 "large_pool_count": 1024, 00:18:34.460 "small_bufsize": 8192, 00:18:34.460 "large_bufsize": 135168, 00:18:34.460 "enable_numa": false 00:18:34.460 } 00:18:34.460 } 00:18:34.460 ] 00:18:34.460 }, 00:18:34.460 { 00:18:34.460 "subsystem": "sock", 00:18:34.460 "config": [ 00:18:34.460 { 00:18:34.460 "method": "sock_set_default_impl", 00:18:34.460 "params": { 00:18:34.460 "impl_name": "posix" 00:18:34.460 } 00:18:34.460 }, 00:18:34.460 { 00:18:34.460 "method": "sock_impl_set_options", 00:18:34.460 "params": { 00:18:34.460 "impl_name": "ssl", 00:18:34.460 "recv_buf_size": 4096, 00:18:34.460 "send_buf_size": 4096, 00:18:34.460 "enable_recv_pipe": true, 00:18:34.460 "enable_quickack": false, 00:18:34.460 "enable_placement_id": 0, 00:18:34.460 "enable_zerocopy_send_server": true, 00:18:34.460 "enable_zerocopy_send_client": false, 00:18:34.460 "zerocopy_threshold": 0, 00:18:34.460 "tls_version": 0, 00:18:34.460 "enable_ktls": false 00:18:34.460 } 00:18:34.460 }, 00:18:34.460 { 00:18:34.460 "method": "sock_impl_set_options", 00:18:34.460 "params": { 
00:18:34.460 "impl_name": "posix", 00:18:34.460 "recv_buf_size": 2097152, 00:18:34.460 "send_buf_size": 2097152, 00:18:34.460 "enable_recv_pipe": true, 00:18:34.460 "enable_quickack": false, 00:18:34.460 "enable_placement_id": 0, 00:18:34.460 "enable_zerocopy_send_server": true, 00:18:34.460 "enable_zerocopy_send_client": false, 00:18:34.460 "zerocopy_threshold": 0, 00:18:34.460 "tls_version": 0, 00:18:34.460 "enable_ktls": false 00:18:34.460 } 00:18:34.460 } 00:18:34.460 ] 00:18:34.460 }, 00:18:34.460 { 00:18:34.460 "subsystem": "vmd", 00:18:34.460 "config": [] 00:18:34.460 }, 00:18:34.460 { 00:18:34.460 "subsystem": "accel", 00:18:34.460 "config": [ 00:18:34.460 { 00:18:34.460 "method": "accel_set_options", 00:18:34.460 "params": { 00:18:34.460 "small_cache_size": 128, 00:18:34.460 "large_cache_size": 16, 00:18:34.460 "task_count": 2048, 00:18:34.460 "sequence_count": 2048, 00:18:34.460 "buf_count": 2048 00:18:34.460 } 00:18:34.460 } 00:18:34.460 ] 00:18:34.460 }, 00:18:34.460 { 00:18:34.460 "subsystem": "bdev", 00:18:34.460 "config": [ 00:18:34.460 { 00:18:34.460 "method": "bdev_set_options", 00:18:34.460 "params": { 00:18:34.460 "bdev_io_pool_size": 65535, 00:18:34.460 "bdev_io_cache_size": 256, 00:18:34.460 "bdev_auto_examine": true, 00:18:34.460 "iobuf_small_cache_size": 128, 00:18:34.460 "iobuf_large_cache_size": 16 00:18:34.460 } 00:18:34.460 }, 00:18:34.460 { 00:18:34.460 "method": "bdev_raid_set_options", 00:18:34.460 "params": { 00:18:34.460 "process_window_size_kb": 1024, 00:18:34.460 "process_max_bandwidth_mb_sec": 0 00:18:34.460 } 00:18:34.460 }, 00:18:34.460 { 00:18:34.460 "method": "bdev_iscsi_set_options", 00:18:34.460 "params": { 00:18:34.460 "timeout_sec": 30 00:18:34.461 } 00:18:34.461 }, 00:18:34.461 { 00:18:34.461 "method": "bdev_nvme_set_options", 00:18:34.461 "params": { 00:18:34.461 "action_on_timeout": "none", 00:18:34.461 "timeout_us": 0, 00:18:34.461 "timeout_admin_us": 0, 00:18:34.461 "keep_alive_timeout_ms": 10000, 00:18:34.461 
"arbitration_burst": 0, 00:18:34.461 "low_priority_weight": 0, 00:18:34.461 "medium_priority_weight": 0, 00:18:34.461 "high_priority_weight": 0, 00:18:34.461 "nvme_adminq_poll_period_us": 10000, 00:18:34.461 "nvme_ioq_poll_period_us": 0, 00:18:34.461 "io_queue_requests": 512, 00:18:34.461 "delay_cmd_submit": true, 00:18:34.461 "transport_retry_count": 4, 00:18:34.461 "bdev_retry_count": 3, 00:18:34.461 "transport_ack_timeout": 0, 00:18:34.461 "ctrlr_loss_timeout_sec": 0, 00:18:34.461 "reconnect_delay_sec": 0, 00:18:34.461 "fast_io_fail_timeout_sec": 0, 00:18:34.461 "disable_auto_failback": false, 00:18:34.461 "generate_uuids": false, 00:18:34.461 "transport_tos": 0, 00:18:34.461 "nvme_error_stat": false, 00:18:34.461 "rdma_srq_size": 0, 00:18:34.461 "io_path_stat": false, 00:18:34.461 "allow_accel_sequence": false, 00:18:34.461 "rdma_max_cq_size": 0, 00:18:34.461 "rdma_cm_event_timeout_ms": 0, 00:18:34.461 "dhchap_digests": [ 00:18:34.461 "sha256", 00:18:34.461 "sha384", 00:18:34.461 "sha512" 00:18:34.461 ], 00:18:34.461 "dhchap_dhgroups": [ 00:18:34.461 "null", 00:18:34.461 "ffdhe2048", 00:18:34.461 "ffdhe3072", 00:18:34.461 "ffdhe4096", 00:18:34.461 "ffdhe6144", 00:18:34.461 "ffdhe8192" 00:18:34.461 ] 00:18:34.461 } 00:18:34.461 }, 00:18:34.461 { 00:18:34.461 "method": "bdev_nvme_attach_controller", 00:18:34.461 "params": { 00:18:34.461 "name": "TLSTEST", 00:18:34.461 "trtype": "TCP", 00:18:34.461 "adrfam": "IPv4", 00:18:34.461 "traddr": "10.0.0.2", 00:18:34.461 "trsvcid": "4420", 00:18:34.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.461 "prchk_reftag": false, 00:18:34.461 "prchk_guard": false, 00:18:34.461 "ctrlr_loss_timeout_sec": 0, 00:18:34.461 "reconnect_delay_sec": 0, 00:18:34.461 "fast_io_fail_timeout_sec": 0, 00:18:34.461 "psk": "key0", 00:18:34.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.461 "hdgst": false, 00:18:34.461 "ddgst": false, 00:18:34.461 "multipath": "multipath" 00:18:34.461 } 00:18:34.461 }, 00:18:34.461 { 00:18:34.461 
"method": "bdev_nvme_set_hotplug", 00:18:34.461 "params": { 00:18:34.461 "period_us": 100000, 00:18:34.461 "enable": false 00:18:34.461 } 00:18:34.461 }, 00:18:34.461 { 00:18:34.461 "method": "bdev_wait_for_examine" 00:18:34.461 } 00:18:34.461 ] 00:18:34.461 }, 00:18:34.461 { 00:18:34.461 "subsystem": "nbd", 00:18:34.461 "config": [] 00:18:34.461 } 00:18:34.461 ] 00:18:34.461 }' 00:18:34.461 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.461 09:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.461 [2024-11-20 09:50:57.650043] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:34.461 [2024-11-20 09:50:57.650090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2933570 ] 00:18:34.461 [2024-11-20 09:50:57.724718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.461 [2024-11-20 09:50:57.765417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.720 [2024-11-20 09:50:57.918386] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.288 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.288 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:35.288 09:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:35.288 Running I/O for 10 seconds... 
00:18:37.599 5347.00 IOPS, 20.89 MiB/s [2024-11-20T08:51:01.866Z] 5362.00 IOPS, 20.95 MiB/s [2024-11-20T08:51:02.801Z] 5397.00 IOPS, 21.08 MiB/s [2024-11-20T08:51:03.737Z] 5426.50 IOPS, 21.20 MiB/s [2024-11-20T08:51:04.671Z] 5394.60 IOPS, 21.07 MiB/s [2024-11-20T08:51:06.047Z] 5420.67 IOPS, 21.17 MiB/s [2024-11-20T08:51:06.981Z] 5427.14 IOPS, 21.20 MiB/s [2024-11-20T08:51:07.915Z] 5436.25 IOPS, 21.24 MiB/s [2024-11-20T08:51:08.848Z] 5431.67 IOPS, 21.22 MiB/s [2024-11-20T08:51:08.848Z] 5436.00 IOPS, 21.23 MiB/s 00:18:45.516 Latency(us) 00:18:45.516 [2024-11-20T08:51:08.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.516 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:45.516 Verification LBA range: start 0x0 length 0x2000 00:18:45.516 TLSTESTn1 : 10.02 5438.93 21.25 0.00 0.00 23497.38 6411.13 23023.08 00:18:45.516 [2024-11-20T08:51:08.848Z] =================================================================================================================== 00:18:45.516 [2024-11-20T08:51:08.848Z] Total : 5438.93 21.25 0.00 0.00 23497.38 6411.13 23023.08 00:18:45.516 { 00:18:45.516 "results": [ 00:18:45.516 { 00:18:45.516 "job": "TLSTESTn1", 00:18:45.516 "core_mask": "0x4", 00:18:45.516 "workload": "verify", 00:18:45.516 "status": "finished", 00:18:45.516 "verify_range": { 00:18:45.516 "start": 0, 00:18:45.516 "length": 8192 00:18:45.516 }, 00:18:45.516 "queue_depth": 128, 00:18:45.516 "io_size": 4096, 00:18:45.516 "runtime": 10.017777, 00:18:45.516 "iops": 5438.931211984455, 00:18:45.516 "mibps": 21.24582504681428, 00:18:45.516 "io_failed": 0, 00:18:45.516 "io_timeout": 0, 00:18:45.516 "avg_latency_us": 23497.37914913923, 00:18:45.516 "min_latency_us": 6411.130434782609, 00:18:45.516 "max_latency_us": 23023.081739130434 00:18:45.516 } 00:18:45.516 ], 00:18:45.516 "core_count": 1 00:18:45.516 } 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2933570 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2933570 ']' 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2933570 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2933570 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2933570' 00:18:45.516 killing process with pid 2933570 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2933570 00:18:45.516 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.516 00:18:45.516 Latency(us) 00:18:45.516 [2024-11-20T08:51:08.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.516 [2024-11-20T08:51:08.848Z] =================================================================================================================== 00:18:45.516 [2024-11-20T08:51:08.848Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.516 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2933570 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2933535 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2933535 ']' 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2933535 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2933535 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2933535' 00:18:45.774 killing process with pid 2933535 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2933535 00:18:45.774 09:51:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2933535 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2935706 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2935706 00:18:45.774 
09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2935706 ']' 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.774 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.033 [2024-11-20 09:51:09.149824] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:46.033 [2024-11-20 09:51:09.149877] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.033 [2024-11-20 09:51:09.229698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.033 [2024-11-20 09:51:09.271194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.033 [2024-11-20 09:51:09.271229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.033 [2024-11-20 09:51:09.271236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.033 [2024-11-20 09:51:09.271242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
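The bdevperf result block earlier in this run (TLSTESTn1: 5438.93 IOPS, 21.25 MiB/s over a 10.02 s verify run) is internally consistent: with 4096-byte I/Os, MiB/s is simply IOPS × 4096 / 2^20, i.e. IOPS/256. A quick cross-check of the reported JSON values with awk:

```shell
# Cross-check the reported bdevperf numbers: mibps = iops * io_size / 2^20.
# Values copied from the "results" JSON earlier in this log.
iops=5438.931211984455
io_size=4096
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.6f", i * s / 1048576 }')
echo "$mibps"   # 21.245825 (log reports mibps 21.24582504681428)
```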
00:18:46.033 [2024-11-20 09:51:09.271247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.033 [2024-11-20 09:51:09.271826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.QqDxSaR1yt 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QqDxSaR1yt 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:46.292 [2024-11-20 09:51:09.588830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.292 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:46.551 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:46.810 [2024-11-20 09:51:09.953765] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:46.810 [2024-11-20 09:51:09.953980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.810 09:51:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:47.069 malloc0 00:18:47.069 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:47.069 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QqDxSaR1yt 00:18:47.328 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.588 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2936300 00:18:47.588 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:47.588 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.588 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2936300 /var/tmp/bdevperf.sock 00:18:47.588 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2936300 ']' 00:18:47.588 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.588 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.588 
09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.588 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.588 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.588 [2024-11-20 09:51:10.783660] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:47.588 [2024-11-20 09:51:10.783709] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936300 ] 00:18:47.588 [2024-11-20 09:51:10.860142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.588 [2024-11-20 09:51:10.902708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.847 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:47.847 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:47.847 09:51:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QqDxSaR1yt 00:18:48.106 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:48.106 [2024-11-20 09:51:11.359295] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:48.106 nvme0n1 00:18:48.422 09:51:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.422 Running I/O for 1 seconds... 00:18:49.441 5196.00 IOPS, 20.30 MiB/s 00:18:49.441 Latency(us) 00:18:49.441 [2024-11-20T08:51:12.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.441 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:49.441 Verification LBA range: start 0x0 length 0x2000 00:18:49.441 nvme0n1 : 1.01 5259.84 20.55 0.00 0.00 24174.74 4872.46 20857.54 00:18:49.441 [2024-11-20T08:51:12.773Z] =================================================================================================================== 00:18:49.441 [2024-11-20T08:51:12.773Z] Total : 5259.84 20.55 0.00 0.00 24174.74 4872.46 20857.54 00:18:49.441 { 00:18:49.441 "results": [ 00:18:49.442 { 00:18:49.442 "job": "nvme0n1", 00:18:49.442 "core_mask": "0x2", 00:18:49.442 "workload": "verify", 00:18:49.442 "status": "finished", 00:18:49.442 "verify_range": { 00:18:49.442 "start": 0, 00:18:49.442 "length": 8192 00:18:49.442 }, 00:18:49.442 "queue_depth": 128, 00:18:49.442 "io_size": 4096, 00:18:49.442 "runtime": 1.012389, 00:18:49.442 "iops": 5259.835893120135, 00:18:49.442 "mibps": 20.546233957500526, 00:18:49.442 "io_failed": 0, 00:18:49.442 "io_timeout": 0, 00:18:49.442 "avg_latency_us": 24174.741492712797, 00:18:49.442 "min_latency_us": 4872.459130434782, 00:18:49.442 "max_latency_us": 20857.544347826086 00:18:49.442 } 00:18:49.442 ], 00:18:49.442 "core_count": 1 00:18:49.442 } 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2936300 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2936300 ']' 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2936300 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2936300 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2936300' 00:18:49.442 killing process with pid 2936300 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2936300 00:18:49.442 Received shutdown signal, test time was about 1.000000 seconds 00:18:49.442 00:18:49.442 Latency(us) 00:18:49.442 [2024-11-20T08:51:12.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.442 [2024-11-20T08:51:12.774Z] =================================================================================================================== 00:18:49.442 [2024-11-20T08:51:12.774Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.442 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2936300 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2935706 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2935706 ']' 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2935706 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2935706 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2935706' 00:18:49.701 killing process with pid 2935706 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2935706 00:18:49.701 09:51:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2935706 00:18:49.701 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2936657 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2936657 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2936657 ']' 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.702 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.961 [2024-11-20 09:51:13.071273] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:49.961 [2024-11-20 09:51:13.071321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.961 [2024-11-20 09:51:13.151124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.961 [2024-11-20 09:51:13.187325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.961 [2024-11-20 09:51:13.187362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.961 [2024-11-20 09:51:13.187369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.961 [2024-11-20 09:51:13.187375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.961 [2024-11-20 09:51:13.187380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.961 [2024-11-20 09:51:13.187927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.220 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.221 [2024-11-20 09:51:13.335257] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.221 malloc0 00:18:50.221 [2024-11-20 09:51:13.363366] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.221 [2024-11-20 09:51:13.363567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2936694 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2936694 /var/tmp/bdevperf.sock 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2936694 ']' 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.221 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.221 [2024-11-20 09:51:13.438644] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:18:50.221 [2024-11-20 09:51:13.438686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936694 ] 00:18:50.221 [2024-11-20 09:51:13.514325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.480 [2024-11-20 09:51:13.557425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.480 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.480 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.480 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QqDxSaR1yt 00:18:50.740 09:51:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:50.740 [2024-11-20 09:51:14.001012] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.740 nvme0n1 00:18:50.998 09:51:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.998 Running I/O for 1 seconds... 
00:18:51.936 5330.00 IOPS, 20.82 MiB/s 00:18:51.936 Latency(us) 00:18:51.936 [2024-11-20T08:51:15.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.936 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.936 Verification LBA range: start 0x0 length 0x2000 00:18:51.936 nvme0n1 : 1.02 5328.40 20.81 0.00 0.00 23799.68 6183.18 35560.40 00:18:51.936 [2024-11-20T08:51:15.268Z] =================================================================================================================== 00:18:51.936 [2024-11-20T08:51:15.268Z] Total : 5328.40 20.81 0.00 0.00 23799.68 6183.18 35560.40 00:18:51.936 { 00:18:51.936 "results": [ 00:18:51.936 { 00:18:51.936 "job": "nvme0n1", 00:18:51.936 "core_mask": "0x2", 00:18:51.936 "workload": "verify", 00:18:51.936 "status": "finished", 00:18:51.936 "verify_range": { 00:18:51.936 "start": 0, 00:18:51.936 "length": 8192 00:18:51.936 }, 00:18:51.936 "queue_depth": 128, 00:18:51.936 "io_size": 4096, 00:18:51.936 "runtime": 1.024511, 00:18:51.936 "iops": 5328.395693164836, 00:18:51.936 "mibps": 20.814045676425142, 00:18:51.936 "io_failed": 0, 00:18:51.936 "io_timeout": 0, 00:18:51.936 "avg_latency_us": 23799.677884626108, 00:18:51.936 "min_latency_us": 6183.179130434783, 00:18:51.936 "max_latency_us": 35560.40347826087 00:18:51.936 } 00:18:51.936 ], 00:18:51.936 "core_count": 1 00:18:51.936 } 00:18:51.936 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:51.936 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.936 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.196 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.196 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:52.196 "subsystems": [ 00:18:52.196 { 00:18:52.196 "subsystem": 
"keyring", 00:18:52.196 "config": [ 00:18:52.197 { 00:18:52.197 "method": "keyring_file_add_key", 00:18:52.197 "params": { 00:18:52.197 "name": "key0", 00:18:52.197 "path": "/tmp/tmp.QqDxSaR1yt" 00:18:52.197 } 00:18:52.197 } 00:18:52.197 ] 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "subsystem": "iobuf", 00:18:52.197 "config": [ 00:18:52.197 { 00:18:52.197 "method": "iobuf_set_options", 00:18:52.197 "params": { 00:18:52.197 "small_pool_count": 8192, 00:18:52.197 "large_pool_count": 1024, 00:18:52.197 "small_bufsize": 8192, 00:18:52.197 "large_bufsize": 135168, 00:18:52.197 "enable_numa": false 00:18:52.197 } 00:18:52.197 } 00:18:52.197 ] 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "subsystem": "sock", 00:18:52.197 "config": [ 00:18:52.197 { 00:18:52.197 "method": "sock_set_default_impl", 00:18:52.197 "params": { 00:18:52.197 "impl_name": "posix" 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "sock_impl_set_options", 00:18:52.197 "params": { 00:18:52.197 "impl_name": "ssl", 00:18:52.197 "recv_buf_size": 4096, 00:18:52.197 "send_buf_size": 4096, 00:18:52.197 "enable_recv_pipe": true, 00:18:52.197 "enable_quickack": false, 00:18:52.197 "enable_placement_id": 0, 00:18:52.197 "enable_zerocopy_send_server": true, 00:18:52.197 "enable_zerocopy_send_client": false, 00:18:52.197 "zerocopy_threshold": 0, 00:18:52.197 "tls_version": 0, 00:18:52.197 "enable_ktls": false 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "sock_impl_set_options", 00:18:52.197 "params": { 00:18:52.197 "impl_name": "posix", 00:18:52.197 "recv_buf_size": 2097152, 00:18:52.197 "send_buf_size": 2097152, 00:18:52.197 "enable_recv_pipe": true, 00:18:52.197 "enable_quickack": false, 00:18:52.197 "enable_placement_id": 0, 00:18:52.197 "enable_zerocopy_send_server": true, 00:18:52.197 "enable_zerocopy_send_client": false, 00:18:52.197 "zerocopy_threshold": 0, 00:18:52.197 "tls_version": 0, 00:18:52.197 "enable_ktls": false 00:18:52.197 } 00:18:52.197 } 00:18:52.197 
] 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "subsystem": "vmd", 00:18:52.197 "config": [] 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "subsystem": "accel", 00:18:52.197 "config": [ 00:18:52.197 { 00:18:52.197 "method": "accel_set_options", 00:18:52.197 "params": { 00:18:52.197 "small_cache_size": 128, 00:18:52.197 "large_cache_size": 16, 00:18:52.197 "task_count": 2048, 00:18:52.197 "sequence_count": 2048, 00:18:52.197 "buf_count": 2048 00:18:52.197 } 00:18:52.197 } 00:18:52.197 ] 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "subsystem": "bdev", 00:18:52.197 "config": [ 00:18:52.197 { 00:18:52.197 "method": "bdev_set_options", 00:18:52.197 "params": { 00:18:52.197 "bdev_io_pool_size": 65535, 00:18:52.197 "bdev_io_cache_size": 256, 00:18:52.197 "bdev_auto_examine": true, 00:18:52.197 "iobuf_small_cache_size": 128, 00:18:52.197 "iobuf_large_cache_size": 16 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "bdev_raid_set_options", 00:18:52.197 "params": { 00:18:52.197 "process_window_size_kb": 1024, 00:18:52.197 "process_max_bandwidth_mb_sec": 0 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "bdev_iscsi_set_options", 00:18:52.197 "params": { 00:18:52.197 "timeout_sec": 30 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "bdev_nvme_set_options", 00:18:52.197 "params": { 00:18:52.197 "action_on_timeout": "none", 00:18:52.197 "timeout_us": 0, 00:18:52.197 "timeout_admin_us": 0, 00:18:52.197 "keep_alive_timeout_ms": 10000, 00:18:52.197 "arbitration_burst": 0, 00:18:52.197 "low_priority_weight": 0, 00:18:52.197 "medium_priority_weight": 0, 00:18:52.197 "high_priority_weight": 0, 00:18:52.197 "nvme_adminq_poll_period_us": 10000, 00:18:52.197 "nvme_ioq_poll_period_us": 0, 00:18:52.197 "io_queue_requests": 0, 00:18:52.197 "delay_cmd_submit": true, 00:18:52.197 "transport_retry_count": 4, 00:18:52.197 "bdev_retry_count": 3, 00:18:52.197 "transport_ack_timeout": 0, 00:18:52.197 "ctrlr_loss_timeout_sec": 0, 
00:18:52.197 "reconnect_delay_sec": 0, 00:18:52.197 "fast_io_fail_timeout_sec": 0, 00:18:52.197 "disable_auto_failback": false, 00:18:52.197 "generate_uuids": false, 00:18:52.197 "transport_tos": 0, 00:18:52.197 "nvme_error_stat": false, 00:18:52.197 "rdma_srq_size": 0, 00:18:52.197 "io_path_stat": false, 00:18:52.197 "allow_accel_sequence": false, 00:18:52.197 "rdma_max_cq_size": 0, 00:18:52.197 "rdma_cm_event_timeout_ms": 0, 00:18:52.197 "dhchap_digests": [ 00:18:52.197 "sha256", 00:18:52.197 "sha384", 00:18:52.197 "sha512" 00:18:52.197 ], 00:18:52.197 "dhchap_dhgroups": [ 00:18:52.197 "null", 00:18:52.197 "ffdhe2048", 00:18:52.197 "ffdhe3072", 00:18:52.197 "ffdhe4096", 00:18:52.197 "ffdhe6144", 00:18:52.197 "ffdhe8192" 00:18:52.197 ] 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "bdev_nvme_set_hotplug", 00:18:52.197 "params": { 00:18:52.197 "period_us": 100000, 00:18:52.197 "enable": false 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "bdev_malloc_create", 00:18:52.197 "params": { 00:18:52.197 "name": "malloc0", 00:18:52.197 "num_blocks": 8192, 00:18:52.197 "block_size": 4096, 00:18:52.197 "physical_block_size": 4096, 00:18:52.197 "uuid": "bad4ef06-836b-40f5-b10c-1f2df5c2e253", 00:18:52.197 "optimal_io_boundary": 0, 00:18:52.197 "md_size": 0, 00:18:52.197 "dif_type": 0, 00:18:52.197 "dif_is_head_of_md": false, 00:18:52.197 "dif_pi_format": 0 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "bdev_wait_for_examine" 00:18:52.197 } 00:18:52.197 ] 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "subsystem": "nbd", 00:18:52.197 "config": [] 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "subsystem": "scheduler", 00:18:52.197 "config": [ 00:18:52.197 { 00:18:52.197 "method": "framework_set_scheduler", 00:18:52.197 "params": { 00:18:52.197 "name": "static" 00:18:52.197 } 00:18:52.197 } 00:18:52.197 ] 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "subsystem": "nvmf", 00:18:52.197 "config": [ 00:18:52.197 { 
00:18:52.197 "method": "nvmf_set_config", 00:18:52.197 "params": { 00:18:52.197 "discovery_filter": "match_any", 00:18:52.197 "admin_cmd_passthru": { 00:18:52.197 "identify_ctrlr": false 00:18:52.197 }, 00:18:52.197 "dhchap_digests": [ 00:18:52.197 "sha256", 00:18:52.197 "sha384", 00:18:52.197 "sha512" 00:18:52.197 ], 00:18:52.197 "dhchap_dhgroups": [ 00:18:52.197 "null", 00:18:52.197 "ffdhe2048", 00:18:52.197 "ffdhe3072", 00:18:52.197 "ffdhe4096", 00:18:52.197 "ffdhe6144", 00:18:52.197 "ffdhe8192" 00:18:52.197 ] 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "nvmf_set_max_subsystems", 00:18:52.197 "params": { 00:18:52.197 "max_subsystems": 1024 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "nvmf_set_crdt", 00:18:52.197 "params": { 00:18:52.197 "crdt1": 0, 00:18:52.197 "crdt2": 0, 00:18:52.197 "crdt3": 0 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "nvmf_create_transport", 00:18:52.197 "params": { 00:18:52.197 "trtype": "TCP", 00:18:52.197 "max_queue_depth": 128, 00:18:52.197 "max_io_qpairs_per_ctrlr": 127, 00:18:52.197 "in_capsule_data_size": 4096, 00:18:52.197 "max_io_size": 131072, 00:18:52.197 "io_unit_size": 131072, 00:18:52.197 "max_aq_depth": 128, 00:18:52.197 "num_shared_buffers": 511, 00:18:52.197 "buf_cache_size": 4294967295, 00:18:52.197 "dif_insert_or_strip": false, 00:18:52.197 "zcopy": false, 00:18:52.197 "c2h_success": false, 00:18:52.197 "sock_priority": 0, 00:18:52.197 "abort_timeout_sec": 1, 00:18:52.197 "ack_timeout": 0, 00:18:52.197 "data_wr_pool_size": 0 00:18:52.197 } 00:18:52.197 }, 00:18:52.197 { 00:18:52.197 "method": "nvmf_create_subsystem", 00:18:52.197 "params": { 00:18:52.197 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.197 "allow_any_host": false, 00:18:52.197 "serial_number": "00000000000000000000", 00:18:52.197 "model_number": "SPDK bdev Controller", 00:18:52.197 "max_namespaces": 32, 00:18:52.197 "min_cntlid": 1, 00:18:52.197 "max_cntlid": 65519, 00:18:52.197 
"ana_reporting": false 00:18:52.197 } 00:18:52.198 }, 00:18:52.198 { 00:18:52.198 "method": "nvmf_subsystem_add_host", 00:18:52.198 "params": { 00:18:52.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.198 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.198 "psk": "key0" 00:18:52.198 } 00:18:52.198 }, 00:18:52.198 { 00:18:52.198 "method": "nvmf_subsystem_add_ns", 00:18:52.198 "params": { 00:18:52.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.198 "namespace": { 00:18:52.198 "nsid": 1, 00:18:52.198 "bdev_name": "malloc0", 00:18:52.198 "nguid": "BAD4EF06836B40F5B10C1F2DF5C2E253", 00:18:52.198 "uuid": "bad4ef06-836b-40f5-b10c-1f2df5c2e253", 00:18:52.198 "no_auto_visible": false 00:18:52.198 } 00:18:52.198 } 00:18:52.198 }, 00:18:52.198 { 00:18:52.198 "method": "nvmf_subsystem_add_listener", 00:18:52.198 "params": { 00:18:52.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.198 "listen_address": { 00:18:52.198 "trtype": "TCP", 00:18:52.198 "adrfam": "IPv4", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "trsvcid": "4420" 00:18:52.198 }, 00:18:52.198 "secure_channel": false, 00:18:52.198 "sock_impl": "ssl" 00:18:52.198 } 00:18:52.198 } 00:18:52.198 ] 00:18:52.198 } 00:18:52.198 ] 00:18:52.198 }' 00:18:52.198 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:52.457 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:52.457 "subsystems": [ 00:18:52.457 { 00:18:52.457 "subsystem": "keyring", 00:18:52.457 "config": [ 00:18:52.457 { 00:18:52.457 "method": "keyring_file_add_key", 00:18:52.457 "params": { 00:18:52.457 "name": "key0", 00:18:52.457 "path": "/tmp/tmp.QqDxSaR1yt" 00:18:52.457 } 00:18:52.457 } 00:18:52.457 ] 00:18:52.457 }, 00:18:52.457 { 00:18:52.457 "subsystem": "iobuf", 00:18:52.457 "config": [ 00:18:52.457 { 00:18:52.457 "method": "iobuf_set_options", 00:18:52.457 "params": { 00:18:52.457 
"small_pool_count": 8192, 00:18:52.457 "large_pool_count": 1024, 00:18:52.457 "small_bufsize": 8192, 00:18:52.457 "large_bufsize": 135168, 00:18:52.457 "enable_numa": false 00:18:52.457 } 00:18:52.457 } 00:18:52.457 ] 00:18:52.457 }, 00:18:52.457 { 00:18:52.457 "subsystem": "sock", 00:18:52.457 "config": [ 00:18:52.457 { 00:18:52.457 "method": "sock_set_default_impl", 00:18:52.457 "params": { 00:18:52.457 "impl_name": "posix" 00:18:52.457 } 00:18:52.457 }, 00:18:52.457 { 00:18:52.457 "method": "sock_impl_set_options", 00:18:52.457 "params": { 00:18:52.457 "impl_name": "ssl", 00:18:52.457 "recv_buf_size": 4096, 00:18:52.457 "send_buf_size": 4096, 00:18:52.457 "enable_recv_pipe": true, 00:18:52.457 "enable_quickack": false, 00:18:52.457 "enable_placement_id": 0, 00:18:52.457 "enable_zerocopy_send_server": true, 00:18:52.457 "enable_zerocopy_send_client": false, 00:18:52.457 "zerocopy_threshold": 0, 00:18:52.457 "tls_version": 0, 00:18:52.457 "enable_ktls": false 00:18:52.457 } 00:18:52.457 }, 00:18:52.457 { 00:18:52.457 "method": "sock_impl_set_options", 00:18:52.457 "params": { 00:18:52.457 "impl_name": "posix", 00:18:52.457 "recv_buf_size": 2097152, 00:18:52.457 "send_buf_size": 2097152, 00:18:52.457 "enable_recv_pipe": true, 00:18:52.457 "enable_quickack": false, 00:18:52.457 "enable_placement_id": 0, 00:18:52.457 "enable_zerocopy_send_server": true, 00:18:52.457 "enable_zerocopy_send_client": false, 00:18:52.457 "zerocopy_threshold": 0, 00:18:52.457 "tls_version": 0, 00:18:52.457 "enable_ktls": false 00:18:52.457 } 00:18:52.457 } 00:18:52.457 ] 00:18:52.457 }, 00:18:52.457 { 00:18:52.457 "subsystem": "vmd", 00:18:52.457 "config": [] 00:18:52.457 }, 00:18:52.457 { 00:18:52.457 "subsystem": "accel", 00:18:52.457 "config": [ 00:18:52.457 { 00:18:52.457 "method": "accel_set_options", 00:18:52.457 "params": { 00:18:52.457 "small_cache_size": 128, 00:18:52.457 "large_cache_size": 16, 00:18:52.457 "task_count": 2048, 00:18:52.457 "sequence_count": 2048, 00:18:52.457 
"buf_count": 2048 00:18:52.457 } 00:18:52.457 } 00:18:52.457 ] 00:18:52.457 }, 00:18:52.457 { 00:18:52.457 "subsystem": "bdev", 00:18:52.457 "config": [ 00:18:52.457 { 00:18:52.457 "method": "bdev_set_options", 00:18:52.457 "params": { 00:18:52.457 "bdev_io_pool_size": 65535, 00:18:52.457 "bdev_io_cache_size": 256, 00:18:52.457 "bdev_auto_examine": true, 00:18:52.457 "iobuf_small_cache_size": 128, 00:18:52.457 "iobuf_large_cache_size": 16 00:18:52.457 } 00:18:52.457 }, 00:18:52.457 { 00:18:52.457 "method": "bdev_raid_set_options", 00:18:52.457 "params": { 00:18:52.457 "process_window_size_kb": 1024, 00:18:52.457 "process_max_bandwidth_mb_sec": 0 00:18:52.457 } 00:18:52.457 }, 00:18:52.457 { 00:18:52.457 "method": "bdev_iscsi_set_options", 00:18:52.457 "params": { 00:18:52.457 "timeout_sec": 30 00:18:52.457 } 00:18:52.458 }, 00:18:52.458 { 00:18:52.458 "method": "bdev_nvme_set_options", 00:18:52.458 "params": { 00:18:52.458 "action_on_timeout": "none", 00:18:52.458 "timeout_us": 0, 00:18:52.458 "timeout_admin_us": 0, 00:18:52.458 "keep_alive_timeout_ms": 10000, 00:18:52.458 "arbitration_burst": 0, 00:18:52.458 "low_priority_weight": 0, 00:18:52.458 "medium_priority_weight": 0, 00:18:52.458 "high_priority_weight": 0, 00:18:52.458 "nvme_adminq_poll_period_us": 10000, 00:18:52.458 "nvme_ioq_poll_period_us": 0, 00:18:52.458 "io_queue_requests": 512, 00:18:52.458 "delay_cmd_submit": true, 00:18:52.458 "transport_retry_count": 4, 00:18:52.458 "bdev_retry_count": 3, 00:18:52.458 "transport_ack_timeout": 0, 00:18:52.458 "ctrlr_loss_timeout_sec": 0, 00:18:52.458 "reconnect_delay_sec": 0, 00:18:52.458 "fast_io_fail_timeout_sec": 0, 00:18:52.458 "disable_auto_failback": false, 00:18:52.458 "generate_uuids": false, 00:18:52.458 "transport_tos": 0, 00:18:52.458 "nvme_error_stat": false, 00:18:52.458 "rdma_srq_size": 0, 00:18:52.458 "io_path_stat": false, 00:18:52.458 "allow_accel_sequence": false, 00:18:52.458 "rdma_max_cq_size": 0, 00:18:52.458 "rdma_cm_event_timeout_ms": 0, 
00:18:52.458 "dhchap_digests": [ 00:18:52.458 "sha256", 00:18:52.458 "sha384", 00:18:52.458 "sha512" 00:18:52.458 ], 00:18:52.458 "dhchap_dhgroups": [ 00:18:52.458 "null", 00:18:52.458 "ffdhe2048", 00:18:52.458 "ffdhe3072", 00:18:52.458 "ffdhe4096", 00:18:52.458 "ffdhe6144", 00:18:52.458 "ffdhe8192" 00:18:52.458 ] 00:18:52.458 } 00:18:52.458 }, 00:18:52.458 { 00:18:52.458 "method": "bdev_nvme_attach_controller", 00:18:52.458 "params": { 00:18:52.458 "name": "nvme0", 00:18:52.458 "trtype": "TCP", 00:18:52.458 "adrfam": "IPv4", 00:18:52.458 "traddr": "10.0.0.2", 00:18:52.458 "trsvcid": "4420", 00:18:52.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.458 "prchk_reftag": false, 00:18:52.458 "prchk_guard": false, 00:18:52.458 "ctrlr_loss_timeout_sec": 0, 00:18:52.458 "reconnect_delay_sec": 0, 00:18:52.458 "fast_io_fail_timeout_sec": 0, 00:18:52.458 "psk": "key0", 00:18:52.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.458 "hdgst": false, 00:18:52.458 "ddgst": false, 00:18:52.458 "multipath": "multipath" 00:18:52.458 } 00:18:52.458 }, 00:18:52.458 { 00:18:52.458 "method": "bdev_nvme_set_hotplug", 00:18:52.458 "params": { 00:18:52.458 "period_us": 100000, 00:18:52.458 "enable": false 00:18:52.458 } 00:18:52.458 }, 00:18:52.458 { 00:18:52.458 "method": "bdev_enable_histogram", 00:18:52.458 "params": { 00:18:52.458 "name": "nvme0n1", 00:18:52.458 "enable": true 00:18:52.458 } 00:18:52.458 }, 00:18:52.458 { 00:18:52.458 "method": "bdev_wait_for_examine" 00:18:52.458 } 00:18:52.458 ] 00:18:52.458 }, 00:18:52.458 { 00:18:52.458 "subsystem": "nbd", 00:18:52.458 "config": [] 00:18:52.458 } 00:18:52.458 ] 00:18:52.458 }' 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2936694 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2936694 ']' 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2936694 00:18:52.458 09:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2936694 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2936694' 00:18:52.458 killing process with pid 2936694 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2936694 00:18:52.458 Received shutdown signal, test time was about 1.000000 seconds 00:18:52.458 00:18:52.458 Latency(us) 00:18:52.458 [2024-11-20T08:51:15.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.458 [2024-11-20T08:51:15.790Z] =================================================================================================================== 00:18:52.458 [2024-11-20T08:51:15.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.458 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2936694 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2936657 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2936657 ']' 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2936657 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.718 
09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2936657 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2936657' 00:18:52.718 killing process with pid 2936657 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2936657 00:18:52.718 09:51:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2936657 00:18:52.718 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:52.718 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.718 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.718 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:52.718 "subsystems": [ 00:18:52.718 { 00:18:52.718 "subsystem": "keyring", 00:18:52.718 "config": [ 00:18:52.718 { 00:18:52.718 "method": "keyring_file_add_key", 00:18:52.718 "params": { 00:18:52.718 "name": "key0", 00:18:52.718 "path": "/tmp/tmp.QqDxSaR1yt" 00:18:52.718 } 00:18:52.718 } 00:18:52.718 ] 00:18:52.718 }, 00:18:52.718 { 00:18:52.718 "subsystem": "iobuf", 00:18:52.718 "config": [ 00:18:52.718 { 00:18:52.718 "method": "iobuf_set_options", 00:18:52.718 "params": { 00:18:52.718 "small_pool_count": 8192, 00:18:52.718 "large_pool_count": 1024, 00:18:52.718 "small_bufsize": 8192, 00:18:52.718 "large_bufsize": 135168, 00:18:52.718 "enable_numa": false 00:18:52.718 } 00:18:52.718 } 00:18:52.718 ] 00:18:52.718 }, 00:18:52.718 { 00:18:52.718 "subsystem": "sock", 00:18:52.718 "config": [ 
00:18:52.718 { 00:18:52.718 "method": "sock_set_default_impl", 00:18:52.718 "params": { 00:18:52.718 "impl_name": "posix" 00:18:52.718 } 00:18:52.718 }, 00:18:52.718 { 00:18:52.718 "method": "sock_impl_set_options", 00:18:52.718 "params": { 00:18:52.718 "impl_name": "ssl", 00:18:52.718 "recv_buf_size": 4096, 00:18:52.718 "send_buf_size": 4096, 00:18:52.718 "enable_recv_pipe": true, 00:18:52.718 "enable_quickack": false, 00:18:52.718 "enable_placement_id": 0, 00:18:52.718 "enable_zerocopy_send_server": true, 00:18:52.718 "enable_zerocopy_send_client": false, 00:18:52.718 "zerocopy_threshold": 0, 00:18:52.718 "tls_version": 0, 00:18:52.718 "enable_ktls": false 00:18:52.718 } 00:18:52.718 }, 00:18:52.718 { 00:18:52.718 "method": "sock_impl_set_options", 00:18:52.718 "params": { 00:18:52.718 "impl_name": "posix", 00:18:52.718 "recv_buf_size": 2097152, 00:18:52.718 "send_buf_size": 2097152, 00:18:52.718 "enable_recv_pipe": true, 00:18:52.718 "enable_quickack": false, 00:18:52.718 "enable_placement_id": 0, 00:18:52.718 "enable_zerocopy_send_server": true, 00:18:52.718 "enable_zerocopy_send_client": false, 00:18:52.718 "zerocopy_threshold": 0, 00:18:52.718 "tls_version": 0, 00:18:52.718 "enable_ktls": false 00:18:52.718 } 00:18:52.718 } 00:18:52.718 ] 00:18:52.718 }, 00:18:52.718 { 00:18:52.718 "subsystem": "vmd", 00:18:52.718 "config": [] 00:18:52.718 }, 00:18:52.718 { 00:18:52.718 "subsystem": "accel", 00:18:52.718 "config": [ 00:18:52.718 { 00:18:52.719 "method": "accel_set_options", 00:18:52.719 "params": { 00:18:52.719 "small_cache_size": 128, 00:18:52.719 "large_cache_size": 16, 00:18:52.719 "task_count": 2048, 00:18:52.719 "sequence_count": 2048, 00:18:52.719 "buf_count": 2048 00:18:52.719 } 00:18:52.719 } 00:18:52.719 ] 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "subsystem": "bdev", 00:18:52.719 "config": [ 00:18:52.719 { 00:18:52.719 "method": "bdev_set_options", 00:18:52.719 "params": { 00:18:52.719 "bdev_io_pool_size": 65535, 00:18:52.719 "bdev_io_cache_size": 
256, 00:18:52.719 "bdev_auto_examine": true, 00:18:52.719 "iobuf_small_cache_size": 128, 00:18:52.719 "iobuf_large_cache_size": 16 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "bdev_raid_set_options", 00:18:52.719 "params": { 00:18:52.719 "process_window_size_kb": 1024, 00:18:52.719 "process_max_bandwidth_mb_sec": 0 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "bdev_iscsi_set_options", 00:18:52.719 "params": { 00:18:52.719 "timeout_sec": 30 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "bdev_nvme_set_options", 00:18:52.719 "params": { 00:18:52.719 "action_on_timeout": "none", 00:18:52.719 "timeout_us": 0, 00:18:52.719 "timeout_admin_us": 0, 00:18:52.719 "keep_alive_timeout_ms": 10000, 00:18:52.719 "arbitration_burst": 0, 00:18:52.719 "low_priority_weight": 0, 00:18:52.719 "medium_priority_weight": 0, 00:18:52.719 "high_priority_weight": 0, 00:18:52.719 "nvme_adminq_poll_period_us": 10000, 00:18:52.719 "nvme_ioq_poll_period_us": 0, 00:18:52.719 "io_queue_requests": 0, 00:18:52.719 "delay_cmd_submit": true, 00:18:52.719 "transport_retry_count": 4, 00:18:52.719 "bdev_retry_count": 3, 00:18:52.719 "transport_ack_timeout": 0, 00:18:52.719 "ctrlr_loss_timeout_sec": 0, 00:18:52.719 "reconnect_delay_sec": 0, 00:18:52.719 "fast_io_fail_timeout_sec": 0, 00:18:52.719 "disable_auto_failback": false, 00:18:52.719 "generate_uuids": false, 00:18:52.719 "transport_tos": 0, 00:18:52.719 "nvme_error_stat": false, 00:18:52.719 "rdma_srq_size": 0, 00:18:52.719 "io_path_stat": false, 00:18:52.719 "allow_accel_sequence": false, 00:18:52.719 "rdma_max_cq_size": 0, 00:18:52.719 "rdma_cm_event_timeout_ms": 0, 00:18:52.719 "dhchap_digests": [ 00:18:52.719 "sha256", 00:18:52.719 "sha384", 00:18:52.719 "sha512" 00:18:52.719 ], 00:18:52.719 "dhchap_dhgroups": [ 00:18:52.719 "null", 00:18:52.719 "ffdhe2048", 00:18:52.719 "ffdhe3072", 00:18:52.719 "ffdhe4096", 00:18:52.719 "ffdhe6144", 00:18:52.719 "ffdhe8192" 00:18:52.719 ] 
00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "bdev_nvme_set_hotplug", 00:18:52.719 "params": { 00:18:52.719 "period_us": 100000, 00:18:52.719 "enable": false 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "bdev_malloc_create", 00:18:52.719 "params": { 00:18:52.719 "name": "malloc0", 00:18:52.719 "num_blocks": 8192, 00:18:52.719 "block_size": 4096, 00:18:52.719 "physical_block_size": 4096, 00:18:52.719 "uuid": "bad4ef06-836b-40f5-b10c-1f2df5c2e253", 00:18:52.719 "optimal_io_boundary": 0, 00:18:52.719 "md_size": 0, 00:18:52.719 "dif_type": 0, 00:18:52.719 "dif_is_head_of_md": false, 00:18:52.719 "dif_pi_format": 0 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "bdev_wait_for_examine" 00:18:52.719 } 00:18:52.719 ] 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "subsystem": "nbd", 00:18:52.719 "config": [] 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "subsystem": "scheduler", 00:18:52.719 "config": [ 00:18:52.719 { 00:18:52.719 "method": "framework_set_scheduler", 00:18:52.719 "params": { 00:18:52.719 "name": "static" 00:18:52.719 } 00:18:52.719 } 00:18:52.719 ] 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "subsystem": "nvmf", 00:18:52.719 "config": [ 00:18:52.719 { 00:18:52.719 "method": "nvmf_set_config", 00:18:52.719 "params": { 00:18:52.719 "discovery_filter": "match_any", 00:18:52.719 "admin_cmd_passthru": { 00:18:52.719 "identify_ctrlr": false 00:18:52.719 }, 00:18:52.719 "dhchap_digests": [ 00:18:52.719 "sha256", 00:18:52.719 "sha384", 00:18:52.719 "sha512" 00:18:52.719 ], 00:18:52.719 "dhchap_dhgroups": [ 00:18:52.719 "null", 00:18:52.719 "ffdhe2048", 00:18:52.719 "ffdhe3072", 00:18:52.719 "ffdhe4096", 00:18:52.719 "ffdhe6144", 00:18:52.719 "ffdhe8192" 00:18:52.719 ] 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "nvmf_set_max_subsystems", 00:18:52.719 "params": { 00:18:52.719 "max_subsystems": 1024 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": 
"nvmf_set_crdt", 00:18:52.719 "params": { 00:18:52.719 "crdt1": 0, 00:18:52.719 "crdt2": 0, 00:18:52.719 "crdt3": 0 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "nvmf_create_transport", 00:18:52.719 "params": { 00:18:52.719 "trtype": "TCP", 00:18:52.719 "max_queue_depth": 128, 00:18:52.719 "max_io_qpairs_per_ctrlr": 127, 00:18:52.719 "in_capsule_data_size": 4096, 00:18:52.719 "max_io_size": 131072, 00:18:52.719 "io_unit_size": 131072, 00:18:52.719 "max_aq_depth": 128, 00:18:52.719 "num_shared_buffers": 511, 00:18:52.719 "buf_cache_size": 4294967295, 00:18:52.719 "dif_insert_or_strip": false, 00:18:52.719 "zcopy": false, 00:18:52.719 "c2h_success": false, 00:18:52.719 "sock_priority": 0, 00:18:52.719 "abort_timeout_sec": 1, 00:18:52.719 "ack_timeout": 0, 00:18:52.719 "data_wr_pool_size": 0 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "nvmf_create_subsystem", 00:18:52.719 "params": { 00:18:52.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.719 "allow_any_host": false, 00:18:52.719 "serial_number": "00000000000000000000", 00:18:52.719 "model_number": "SPDK bdev Controller", 00:18:52.719 "max_namespaces": 32, 00:18:52.719 "min_cntlid": 1, 00:18:52.719 "max_cntlid": 65519, 00:18:52.719 "ana_reporting": false 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "nvmf_subsystem_add_host", 00:18:52.719 "params": { 00:18:52.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.719 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.719 "psk": "key0" 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 00:18:52.719 "method": "nvmf_subsystem_add_ns", 00:18:52.719 "params": { 00:18:52.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.719 "namespace": { 00:18:52.719 "nsid": 1, 00:18:52.719 "bdev_name": "malloc0", 00:18:52.719 "nguid": "BAD4EF06836B40F5B10C1F2DF5C2E253", 00:18:52.719 "uuid": "bad4ef06-836b-40f5-b10c-1f2df5c2e253", 00:18:52.719 "no_auto_visible": false 00:18:52.719 } 00:18:52.719 } 00:18:52.719 }, 00:18:52.719 { 
00:18:52.719 "method": "nvmf_subsystem_add_listener", 00:18:52.719 "params": { 00:18:52.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.719 "listen_address": { 00:18:52.719 "trtype": "TCP", 00:18:52.719 "adrfam": "IPv4", 00:18:52.719 "traddr": "10.0.0.2", 00:18:52.720 "trsvcid": "4420" 00:18:52.720 }, 00:18:52.720 "secure_channel": false, 00:18:52.720 "sock_impl": "ssl" 00:18:52.720 } 00:18:52.720 } 00:18:52.720 ] 00:18:52.720 } 00:18:52.720 ] 00:18:52.720 }' 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2937158 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2937158 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2937158 ']' 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.720 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.979 [2024-11-20 09:51:16.077181] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:18:52.979 [2024-11-20 09:51:16.077226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.979 [2024-11-20 09:51:16.154540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.979 [2024-11-20 09:51:16.195895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.979 [2024-11-20 09:51:16.195933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.979 [2024-11-20 09:51:16.195941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.979 [2024-11-20 09:51:16.195950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.979 [2024-11-20 09:51:16.195956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.979 [2024-11-20 09:51:16.196536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.239 [2024-11-20 09:51:16.409840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.239 [2024-11-20 09:51:16.441875] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.239 [2024-11-20 09:51:16.442089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2937400 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2937400 /var/tmp/bdevperf.sock 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2937400 ']' 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:53.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.807 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:53.807 "subsystems": [ 00:18:53.807 { 00:18:53.807 "subsystem": "keyring", 00:18:53.807 "config": [ 00:18:53.807 { 00:18:53.807 "method": "keyring_file_add_key", 00:18:53.807 "params": { 00:18:53.807 "name": "key0", 00:18:53.807 "path": "/tmp/tmp.QqDxSaR1yt" 00:18:53.807 } 00:18:53.807 } 00:18:53.807 ] 00:18:53.807 }, 00:18:53.807 { 00:18:53.807 "subsystem": "iobuf", 00:18:53.807 "config": [ 00:18:53.807 { 00:18:53.807 "method": "iobuf_set_options", 00:18:53.807 "params": { 00:18:53.807 "small_pool_count": 8192, 00:18:53.807 "large_pool_count": 1024, 00:18:53.807 "small_bufsize": 8192, 00:18:53.807 "large_bufsize": 135168, 00:18:53.807 "enable_numa": false 00:18:53.807 } 00:18:53.808 } 00:18:53.808 ] 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "subsystem": "sock", 00:18:53.808 "config": [ 00:18:53.808 { 00:18:53.808 "method": "sock_set_default_impl", 00:18:53.808 "params": { 00:18:53.808 "impl_name": "posix" 00:18:53.808 } 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "method": "sock_impl_set_options", 00:18:53.808 "params": { 00:18:53.808 "impl_name": "ssl", 00:18:53.808 "recv_buf_size": 4096, 00:18:53.808 "send_buf_size": 4096, 00:18:53.808 "enable_recv_pipe": true, 00:18:53.808 "enable_quickack": false, 00:18:53.808 "enable_placement_id": 0, 00:18:53.808 "enable_zerocopy_send_server": true, 00:18:53.808 "enable_zerocopy_send_client": false, 00:18:53.808 "zerocopy_threshold": 0, 00:18:53.808 "tls_version": 0, 00:18:53.808 "enable_ktls": false 00:18:53.808 } 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "method": "sock_impl_set_options", 00:18:53.808 "params": { 
00:18:53.808 "impl_name": "posix", 00:18:53.808 "recv_buf_size": 2097152, 00:18:53.808 "send_buf_size": 2097152, 00:18:53.808 "enable_recv_pipe": true, 00:18:53.808 "enable_quickack": false, 00:18:53.808 "enable_placement_id": 0, 00:18:53.808 "enable_zerocopy_send_server": true, 00:18:53.808 "enable_zerocopy_send_client": false, 00:18:53.808 "zerocopy_threshold": 0, 00:18:53.808 "tls_version": 0, 00:18:53.808 "enable_ktls": false 00:18:53.808 } 00:18:53.808 } 00:18:53.808 ] 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "subsystem": "vmd", 00:18:53.808 "config": [] 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "subsystem": "accel", 00:18:53.808 "config": [ 00:18:53.808 { 00:18:53.808 "method": "accel_set_options", 00:18:53.808 "params": { 00:18:53.808 "small_cache_size": 128, 00:18:53.808 "large_cache_size": 16, 00:18:53.808 "task_count": 2048, 00:18:53.808 "sequence_count": 2048, 00:18:53.808 "buf_count": 2048 00:18:53.808 } 00:18:53.808 } 00:18:53.808 ] 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "subsystem": "bdev", 00:18:53.808 "config": [ 00:18:53.808 { 00:18:53.808 "method": "bdev_set_options", 00:18:53.808 "params": { 00:18:53.808 "bdev_io_pool_size": 65535, 00:18:53.808 "bdev_io_cache_size": 256, 00:18:53.808 "bdev_auto_examine": true, 00:18:53.808 "iobuf_small_cache_size": 128, 00:18:53.808 "iobuf_large_cache_size": 16 00:18:53.808 } 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "method": "bdev_raid_set_options", 00:18:53.808 "params": { 00:18:53.808 "process_window_size_kb": 1024, 00:18:53.808 "process_max_bandwidth_mb_sec": 0 00:18:53.808 } 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "method": "bdev_iscsi_set_options", 00:18:53.808 "params": { 00:18:53.808 "timeout_sec": 30 00:18:53.808 } 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "method": "bdev_nvme_set_options", 00:18:53.808 "params": { 00:18:53.808 "action_on_timeout": "none", 00:18:53.808 "timeout_us": 0, 00:18:53.808 "timeout_admin_us": 0, 00:18:53.808 "keep_alive_timeout_ms": 10000, 00:18:53.808 
"arbitration_burst": 0, 00:18:53.808 "low_priority_weight": 0, 00:18:53.808 "medium_priority_weight": 0, 00:18:53.808 "high_priority_weight": 0, 00:18:53.808 "nvme_adminq_poll_period_us": 10000, 00:18:53.808 "nvme_ioq_poll_period_us": 0, 00:18:53.808 "io_queue_requests": 512, 00:18:53.808 "delay_cmd_submit": true, 00:18:53.808 "transport_retry_count": 4, 00:18:53.808 "bdev_retry_count": 3, 00:18:53.808 "transport_ack_timeout": 0, 00:18:53.808 "ctrlr_loss_timeout_sec": 0, 00:18:53.808 "reconnect_delay_sec": 0, 00:18:53.808 "fast_io_fail_timeout_sec": 0, 00:18:53.808 "disable_auto_failback": false, 00:18:53.808 "generate_uuids": false, 00:18:53.808 "transport_tos": 0, 00:18:53.808 "nvme_error_stat": false, 00:18:53.808 "rdma_srq_size": 0, 00:18:53.808 "io_path_stat": false, 00:18:53.808 "allow_accel_sequence": false, 00:18:53.808 "rdma_max_cq_size": 0, 00:18:53.808 "rdma_cm_event_timeout_ms": 0, 00:18:53.808 "dhchap_digests": [ 00:18:53.808 "sha256", 00:18:53.808 "sha384", 00:18:53.808 "sha512" 00:18:53.808 ], 00:18:53.808 "dhchap_dhgroups": [ 00:18:53.808 "null", 00:18:53.808 "ffdhe2048", 00:18:53.808 "ffdhe3072", 00:18:53.808 "ffdhe4096", 00:18:53.808 "ffdhe6144", 00:18:53.808 "ffdhe8192" 00:18:53.808 ] 00:18:53.808 } 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "method": "bdev_nvme_attach_controller", 00:18:53.808 "params": { 00:18:53.808 "name": "nvme0", 00:18:53.808 "trtype": "TCP", 00:18:53.808 "adrfam": "IPv4", 00:18:53.808 "traddr": "10.0.0.2", 00:18:53.808 "trsvcid": "4420", 00:18:53.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.808 "prchk_reftag": false, 00:18:53.808 "prchk_guard": false, 00:18:53.808 "ctrlr_loss_timeout_sec": 0, 00:18:53.808 "reconnect_delay_sec": 0, 00:18:53.808 "fast_io_fail_timeout_sec": 0, 00:18:53.808 "psk": "key0", 00:18:53.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.808 "hdgst": false, 00:18:53.808 "ddgst": false, 00:18:53.808 "multipath": "multipath" 00:18:53.808 } 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 
"method": "bdev_nvme_set_hotplug", 00:18:53.808 "params": { 00:18:53.808 "period_us": 100000, 00:18:53.808 "enable": false 00:18:53.808 } 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "method": "bdev_enable_histogram", 00:18:53.808 "params": { 00:18:53.808 "name": "nvme0n1", 00:18:53.808 "enable": true 00:18:53.808 } 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "method": "bdev_wait_for_examine" 00:18:53.808 } 00:18:53.808 ] 00:18:53.808 }, 00:18:53.808 { 00:18:53.808 "subsystem": "nbd", 00:18:53.808 "config": [] 00:18:53.808 } 00:18:53.808 ] 00:18:53.809 }' 00:18:53.809 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.809 09:51:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.809 [2024-11-20 09:51:16.990962] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:18:53.809 [2024-11-20 09:51:16.991009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937400 ] 00:18:53.809 [2024-11-20 09:51:17.063964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.809 [2024-11-20 09:51:17.106435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.068 [2024-11-20 09:51:17.258693] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.635 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.635 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.635 09:51:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:54.635 09:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:54.893 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.893 09:51:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.893 Running I/O for 1 seconds... 00:18:55.829 5317.00 IOPS, 20.77 MiB/s 00:18:55.829 Latency(us) 00:18:55.829 [2024-11-20T08:51:19.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.829 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.829 Verification LBA range: start 0x0 length 0x2000 00:18:55.829 nvme0n1 : 1.01 5378.21 21.01 0.00 0.00 23641.23 5214.39 23706.94 00:18:55.829 [2024-11-20T08:51:19.161Z] =================================================================================================================== 00:18:55.829 [2024-11-20T08:51:19.161Z] Total : 5378.21 21.01 0.00 0.00 23641.23 5214.39 23706.94 00:18:55.829 { 00:18:55.829 "results": [ 00:18:55.829 { 00:18:55.829 "job": "nvme0n1", 00:18:55.829 "core_mask": "0x2", 00:18:55.829 "workload": "verify", 00:18:55.829 "status": "finished", 00:18:55.829 "verify_range": { 00:18:55.829 "start": 0, 00:18:55.829 "length": 8192 00:18:55.829 }, 00:18:55.829 "queue_depth": 128, 00:18:55.829 "io_size": 4096, 00:18:55.829 "runtime": 1.012419, 00:18:55.829 "iops": 5378.2080344205315, 00:18:55.829 "mibps": 21.0086251344552, 00:18:55.829 "io_failed": 0, 00:18:55.829 "io_timeout": 0, 00:18:55.829 "avg_latency_us": 23641.22957831277, 00:18:55.829 "min_latency_us": 5214.3860869565215, 00:18:55.829 "max_latency_us": 23706.935652173914 00:18:55.829 } 00:18:55.829 ], 00:18:55.829 "core_count": 1 00:18:55.829 } 00:18:55.829 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:55.829 09:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:55.829 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:55.829 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:55.829 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:55.829 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:55.829 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:56.089 nvmf_trace.0 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2937400 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2937400 ']' 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2937400 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2937400 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2937400' 00:18:56.089 killing process with pid 2937400 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2937400 00:18:56.089 Received shutdown signal, test time was about 1.000000 seconds 00:18:56.089 00:18:56.089 Latency(us) 00:18:56.089 [2024-11-20T08:51:19.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.089 [2024-11-20T08:51:19.421Z] =================================================================================================================== 00:18:56.089 [2024-11-20T08:51:19.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.089 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2937400 00:18:56.348 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:56.349 rmmod nvme_tcp 00:18:56.349 rmmod nvme_fabrics 00:18:56.349 rmmod nvme_keyring 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2937158 ']' 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2937158 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2937158 ']' 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2937158 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2937158 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2937158' 00:18:56.349 killing process with pid 2937158 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2937158 00:18:56.349 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2937158 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.609 09:51:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.515 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:58.515 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aQDwxO8lyD /tmp/tmp.KwVVRpH0ue /tmp/tmp.QqDxSaR1yt 00:18:58.515 00:18:58.515 real 1m19.394s 00:18:58.515 user 2m2.092s 00:18:58.515 sys 0m30.015s 00:18:58.515 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.515 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.515 ************************************ 00:18:58.515 END TEST nvmf_tls 00:18:58.515 ************************************ 00:18:58.515 09:51:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:58.515 09:51:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.515 09:51:21 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.515 09:51:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.776 ************************************ 00:18:58.776 START TEST nvmf_fips 00:18:58.776 ************************************ 00:18:58.776 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:58.776 * Looking for test storage... 00:18:58.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:58.776 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:18:58.776 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1703 -- # lcov --version 00:18:58.776 09:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.776 
09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:58.776 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:18:58.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.776 --rc genhtml_branch_coverage=1 00:18:58.776 --rc genhtml_function_coverage=1 00:18:58.776 --rc genhtml_legend=1 00:18:58.776 --rc geninfo_all_blocks=1 00:18:58.776 --rc geninfo_unexecuted_blocks=1 00:18:58.776 00:18:58.776 ' 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:18:58.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.776 --rc genhtml_branch_coverage=1 00:18:58.776 --rc genhtml_function_coverage=1 00:18:58.776 --rc genhtml_legend=1 00:18:58.776 --rc geninfo_all_blocks=1 00:18:58.776 --rc geninfo_unexecuted_blocks=1 00:18:58.776 00:18:58.776 ' 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:18:58.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.776 --rc genhtml_branch_coverage=1 00:18:58.776 --rc genhtml_function_coverage=1 00:18:58.776 --rc genhtml_legend=1 00:18:58.776 --rc geninfo_all_blocks=1 00:18:58.776 --rc geninfo_unexecuted_blocks=1 00:18:58.776 00:18:58.776 ' 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:18:58.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.776 --rc genhtml_branch_coverage=1 00:18:58.776 --rc genhtml_function_coverage=1 00:18:58.776 --rc genhtml_legend=1 00:18:58.776 --rc geninfo_all_blocks=1 00:18:58.776 --rc geninfo_unexecuted_blocks=1 00:18:58.776 00:18:58.776 ' 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.776 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.776 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.776 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.777 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.038 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:59.039 Error setting digest 00:18:59.039 400242611B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:59.039 400242611B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:59.039 09:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:59.039 09:51:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:05.614 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:05.614 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:05.614 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:05.615 Found net devices under 0000:86:00.0: cvl_0_0 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:05.615 Found net devices under 0000:86:00.1: cvl_0_1 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:05.615 09:51:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:05.615 09:51:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:05.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:05.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:19:05.615 00:19:05.615 --- 10.0.0.2 ping statistics --- 00:19:05.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.615 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:05.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:05.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:19:05.615 00:19:05.615 --- 10.0.0.1 ping statistics --- 00:19:05.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:05.615 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:05.615 09:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2941423 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2941423 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2941423 ']' 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.615 09:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.615 [2024-11-20 09:51:28.277471] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:19:05.615 [2024-11-20 09:51:28.277521] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.615 [2024-11-20 09:51:28.358292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.615 [2024-11-20 09:51:28.401114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.615 [2024-11-20 09:51:28.401148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.615 [2024-11-20 09:51:28.401155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.615 [2024-11-20 09:51:28.401162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.615 [2024-11-20 09:51:28.401167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:05.615 [2024-11-20 09:51:28.401734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.874 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Uub 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Uub 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Uub 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Uub 00:19:05.875 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.133 [2024-11-20 09:51:29.334061] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.133 [2024-11-20 09:51:29.350067] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.133 [2024-11-20 09:51:29.350225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.133 malloc0 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2941606 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2941606 /var/tmp/bdevperf.sock 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2941606 ']' 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.133 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.392 [2024-11-20 09:51:29.480113] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:19:06.392 [2024-11-20 09:51:29.480165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941606 ] 00:19:06.392 [2024-11-20 09:51:29.552747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.392 [2024-11-20 09:51:29.593247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.392 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.392 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:06.392 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Uub 00:19:06.651 09:51:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:06.910 [2024-11-20 09:51:30.056672] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.910 TLSTESTn1 00:19:06.910 09:51:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:06.910 Running I/O for 10 seconds... 
00:19:09.222 5323.00 IOPS, 20.79 MiB/s [2024-11-20T08:51:33.491Z] 5360.50 IOPS, 20.94 MiB/s [2024-11-20T08:51:34.426Z] 5399.67 IOPS, 21.09 MiB/s [2024-11-20T08:51:35.362Z] 5432.50 IOPS, 21.22 MiB/s [2024-11-20T08:51:36.300Z] 5446.20 IOPS, 21.27 MiB/s [2024-11-20T08:51:37.678Z] 5435.50 IOPS, 21.23 MiB/s [2024-11-20T08:51:38.615Z] 5410.00 IOPS, 21.13 MiB/s [2024-11-20T08:51:39.552Z] 5413.88 IOPS, 21.15 MiB/s [2024-11-20T08:51:40.489Z] 5426.11 IOPS, 21.20 MiB/s [2024-11-20T08:51:40.489Z] 5422.30 IOPS, 21.18 MiB/s 00:19:17.157 Latency(us) 00:19:17.157 [2024-11-20T08:51:40.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.157 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.157 Verification LBA range: start 0x0 length 0x2000 00:19:17.157 TLSTESTn1 : 10.02 5426.32 21.20 0.00 0.00 23552.58 5584.81 24504.77 00:19:17.157 [2024-11-20T08:51:40.489Z] =================================================================================================================== 00:19:17.157 [2024-11-20T08:51:40.489Z] Total : 5426.32 21.20 0.00 0.00 23552.58 5584.81 24504.77 00:19:17.157 { 00:19:17.157 "results": [ 00:19:17.157 { 00:19:17.157 "job": "TLSTESTn1", 00:19:17.157 "core_mask": "0x4", 00:19:17.157 "workload": "verify", 00:19:17.157 "status": "finished", 00:19:17.157 "verify_range": { 00:19:17.157 "start": 0, 00:19:17.157 "length": 8192 00:19:17.157 }, 00:19:17.157 "queue_depth": 128, 00:19:17.157 "io_size": 4096, 00:19:17.157 "runtime": 10.015991, 00:19:17.157 "iops": 5426.32276726287, 00:19:17.157 "mibps": 21.196573309620586, 00:19:17.157 "io_failed": 0, 00:19:17.157 "io_timeout": 0, 00:19:17.157 "avg_latency_us": 23552.583836870523, 00:19:17.157 "min_latency_us": 5584.806956521739, 00:19:17.157 "max_latency_us": 24504.765217391305 00:19:17.157 } 00:19:17.157 ], 00:19:17.157 "core_count": 1 00:19:17.157 } 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:17.157 
09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:17.157 nvmf_trace.0 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2941606 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2941606 ']' 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2941606 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2941606 00:19:17.157 09:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2941606' 00:19:17.157 killing process with pid 2941606 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2941606 00:19:17.157 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.157 00:19:17.157 Latency(us) 00:19:17.157 [2024-11-20T08:51:40.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.157 [2024-11-20T08:51:40.489Z] =================================================================================================================== 00:19:17.157 [2024-11-20T08:51:40.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.157 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2941606 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:17.417 rmmod nvme_tcp 00:19:17.417 rmmod nvme_fabrics 00:19:17.417 rmmod nvme_keyring 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2941423 ']' 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2941423 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2941423 ']' 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2941423 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2941423 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2941423' 00:19:17.417 killing process with pid 2941423 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2941423 00:19:17.417 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2941423 00:19:17.676 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:17.676 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:17.676 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:17.676 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:17.676 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:17.676 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:17.677 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:17.677 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:17.677 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:17.677 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.677 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.677 09:51:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.213 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:20.213 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Uub 00:19:20.213 00:19:20.213 real 0m21.066s 00:19:20.213 user 0m22.194s 00:19:20.213 sys 0m9.517s 00:19:20.213 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.213 09:51:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.213 ************************************ 00:19:20.213 END TEST nvmf_fips 00:19:20.213 ************************************ 00:19:20.213 09:51:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:20.213 09:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.213 09:51:42 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.213 09:51:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:20.213 ************************************ 00:19:20.213 START TEST nvmf_control_msg_list 00:19:20.213 ************************************ 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:20.213 * Looking for test storage... 00:19:20.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1703 -- # lcov --version 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.213 09:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:19:20.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.213 --rc genhtml_branch_coverage=1 00:19:20.213 --rc genhtml_function_coverage=1 00:19:20.213 --rc genhtml_legend=1 00:19:20.213 --rc geninfo_all_blocks=1 00:19:20.213 --rc geninfo_unexecuted_blocks=1 00:19:20.213 00:19:20.213 ' 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:19:20.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.213 --rc genhtml_branch_coverage=1 00:19:20.213 --rc genhtml_function_coverage=1 00:19:20.213 --rc genhtml_legend=1 00:19:20.213 --rc geninfo_all_blocks=1 00:19:20.213 --rc geninfo_unexecuted_blocks=1 00:19:20.213 00:19:20.213 ' 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:19:20.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.213 --rc genhtml_branch_coverage=1 00:19:20.213 --rc genhtml_function_coverage=1 00:19:20.213 --rc genhtml_legend=1 00:19:20.213 --rc geninfo_all_blocks=1 00:19:20.213 --rc geninfo_unexecuted_blocks=1 00:19:20.213 00:19:20.213 ' 00:19:20.213 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1717 -- # 
LCOV='lcov 00:19:20.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.213 --rc genhtml_branch_coverage=1 00:19:20.213 --rc genhtml_function_coverage=1 00:19:20.213 --rc genhtml_legend=1 00:19:20.213 --rc geninfo_all_blocks=1 00:19:20.213 --rc geninfo_unexecuted_blocks=1 00:19:20.213 00:19:20.214 ' 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.214 09:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.214 09:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.214 09:51:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:26.787 09:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:26.787 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.787 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:26.788 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:26.788 09:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:26.788 Found net devices under 0000:86:00.0: cvl_0_0 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.788 09:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:26.788 Found net devices under 0000:86:00.1: cvl_0_1 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.788 09:51:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.788 09:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:26.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:19:26.788 00:19:26.788 --- 10.0.0.2 ping statistics --- 00:19:26.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.788 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:19:26.788 00:19:26.788 --- 10.0.0.1 ping statistics --- 00:19:26.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.788 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2946821 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2946821 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2946821 ']' 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.788 [2024-11-20 09:51:49.205590] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:19:26.788 [2024-11-20 09:51:49.205637] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.788 [2024-11-20 09:51:49.288265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.788 [2024-11-20 09:51:49.327563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.788 [2024-11-20 09:51:49.327602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.788 [2024-11-20 09:51:49.327610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.788 [2024-11-20 09:51:49.327616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.788 [2024-11-20 09:51:49.327622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:26.788 [2024-11-20 09:51:49.328214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.788 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.789 [2024-11-20 09:51:49.475998] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.789 Malloc0 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:26.789 [2024-11-20 09:51:49.516486] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2947018 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2947019 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2947021 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:26.789 09:51:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2947018 00:19:26.789 [2024-11-20 09:51:49.605243] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:26.789 [2024-11-20 09:51:49.605427] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:26.789 [2024-11-20 09:51:49.605577] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:27.355 Initializing NVMe Controllers 00:19:27.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:27.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:27.355 Initialization complete. Launching workers. 00:19:27.355 ======================================================== 00:19:27.355 Latency(us) 00:19:27.355 Device Information : IOPS MiB/s Average min max 00:19:27.355 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5082.93 19.86 196.34 125.24 358.65 00:19:27.355 ======================================================== 00:19:27.355 Total : 5082.93 19.86 196.34 125.24 358.65 00:19:27.355 00:19:27.613 Initializing NVMe Controllers 00:19:27.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:27.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:27.613 Initialization complete. Launching workers. 
00:19:27.613 ======================================================== 00:19:27.613 Latency(us) 00:19:27.613 Device Information : IOPS MiB/s Average min max 00:19:27.613 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40985.15 40833.89 41944.25 00:19:27.613 ======================================================== 00:19:27.613 Total : 25.00 0.10 40985.15 40833.89 41944.25 00:19:27.613 00:19:27.613 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2947019 00:19:27.613 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2947021 00:19:27.613 Initializing NVMe Controllers 00:19:27.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:27.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:27.613 Initialization complete. Launching workers. 00:19:27.613 ======================================================== 00:19:27.613 Latency(us) 00:19:27.613 Device Information : IOPS MiB/s Average min max 00:19:27.613 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40926.32 40651.05 41911.07 00:19:27.613 ======================================================== 00:19:27.613 Total : 25.00 0.10 40926.32 40651.05 41911.07 00:19:27.613 00:19:27.613 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:27.613 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:27.613 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:27.613 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:27.613 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:27.613 09:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:27.613 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.614 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:27.614 rmmod nvme_tcp 00:19:27.614 rmmod nvme_fabrics 00:19:27.614 rmmod nvme_keyring 00:19:27.614 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.873 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:27.873 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:27.873 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2946821 ']' 00:19:27.873 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2946821 00:19:27.873 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2946821 ']' 00:19:27.873 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2946821 00:19:27.873 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:27.873 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.873 09:51:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2946821 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2946821' 00:19:27.873 killing process with pid 2946821 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2946821 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2946821 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.873 09:51:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:30.459 00:19:30.459 real 0m10.225s 00:19:30.459 user 0m6.856s 
00:19:30.459 sys 0m5.416s 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.459 ************************************ 00:19:30.459 END TEST nvmf_control_msg_list 00:19:30.459 ************************************ 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:30.459 ************************************ 00:19:30.459 START TEST nvmf_wait_for_buf 00:19:30.459 ************************************ 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:30.459 * Looking for test storage... 
00:19:30.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1703 -- # lcov --version 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:30.459 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1716 -- # 
export 'LCOV_OPTS= 00:19:30.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.460 --rc genhtml_branch_coverage=1 00:19:30.460 --rc genhtml_function_coverage=1 00:19:30.460 --rc genhtml_legend=1 00:19:30.460 --rc geninfo_all_blocks=1 00:19:30.460 --rc geninfo_unexecuted_blocks=1 00:19:30.460 00:19:30.460 ' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:19:30.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.460 --rc genhtml_branch_coverage=1 00:19:30.460 --rc genhtml_function_coverage=1 00:19:30.460 --rc genhtml_legend=1 00:19:30.460 --rc geninfo_all_blocks=1 00:19:30.460 --rc geninfo_unexecuted_blocks=1 00:19:30.460 00:19:30.460 ' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:19:30.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.460 --rc genhtml_branch_coverage=1 00:19:30.460 --rc genhtml_function_coverage=1 00:19:30.460 --rc genhtml_legend=1 00:19:30.460 --rc geninfo_all_blocks=1 00:19:30.460 --rc geninfo_unexecuted_blocks=1 00:19:30.460 00:19:30.460 ' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:19:30.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.460 --rc genhtml_branch_coverage=1 00:19:30.460 --rc genhtml_function_coverage=1 00:19:30.460 --rc genhtml_legend=1 00:19:30.460 --rc geninfo_all_blocks=1 00:19:30.460 --rc geninfo_unexecuted_blocks=1 00:19:30.460 00:19:30.460 ' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:30.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:30.460 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.461 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.461 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.461 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:30.461 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:30.461 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:30.461 09:51:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.034 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:37.035 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:37.035 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:37.035 Found net devices under 0000:86:00.0: cvl_0_0 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.035 09:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:37.035 Found net devices under 0000:86:00.1: cvl_0_1 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:37.035 09:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.035 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.035 09:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:37.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:19:37.036 00:19:37.036 --- 10.0.0.2 ping statistics --- 00:19:37.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.036 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:19:37.036 00:19:37.036 --- 10.0.0.1 ping statistics --- 00:19:37.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.036 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2950729 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2950729 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2950729 ']' 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 [2024-11-20 09:51:59.525161] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:19:37.036 [2024-11-20 09:51:59.525213] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.036 [2024-11-20 09:51:59.603291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.036 [2024-11-20 09:51:59.646485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:37.036 [2024-11-20 09:51:59.646522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:37.036 [2024-11-20 09:51:59.646533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:37.036 [2024-11-20 09:51:59.646539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:37.036 [2024-11-20 09:51:59.646545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:37.036 [2024-11-20 09:51:59.647128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 
09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 Malloc0 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.036 [2024-11-20 09:51:59.815668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.036 [2024-11-20 09:51:59.843841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:37.036 09:51:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:37.036 [2024-11-20 09:51:59.925027] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:38.415 Initializing NVMe Controllers 00:19:38.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:38.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:38.415 Initialization complete. Launching workers. 00:19:38.415 ======================================================== 00:19:38.415 Latency(us) 00:19:38.415 Device Information : IOPS MiB/s Average min max 00:19:38.415 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32230.55 7243.92 63848.04 00:19:38.415 ======================================================== 00:19:38.415 Total : 129.00 16.12 32230.55 7243.92 63848.04 00:19:38.415 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.415 09:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.415 rmmod nvme_tcp 00:19:38.415 rmmod nvme_fabrics 00:19:38.415 rmmod nvme_keyring 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2950729 ']' 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2950729 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2950729 ']' 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2950729 
00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2950729 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2950729' 00:19:38.415 killing process with pid 2950729 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2950729 00:19:38.415 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2950729 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.674 09:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.674 09:52:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.579 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:40.579 00:19:40.579 real 0m10.563s 00:19:40.579 user 0m4.095s 00:19:40.579 sys 0m4.925s 00:19:40.579 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.579 09:52:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.579 ************************************ 00:19:40.579 END TEST nvmf_wait_for_buf 00:19:40.579 ************************************ 00:19:40.838 09:52:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:40.838 09:52:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:40.839 09:52:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:40.839 09:52:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:40.839 09:52:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.839 09:52:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.489 
09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.489 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:47.490 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.490 09:52:09 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:47.490 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:47.490 Found net devices under 0000:86:00.0: cvl_0_0 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:47.490 Found net devices under 0000:86:00.1: cvl_0_1 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:47.490 ************************************ 00:19:47.490 START TEST nvmf_perf_adq 00:19:47.490 ************************************ 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:47.490 * Looking for test storage... 00:19:47.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1703 -- # lcov --version 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:19:47.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.490 --rc genhtml_branch_coverage=1 00:19:47.490 --rc genhtml_function_coverage=1 00:19:47.490 --rc genhtml_legend=1 00:19:47.490 --rc geninfo_all_blocks=1 00:19:47.490 --rc geninfo_unexecuted_blocks=1 00:19:47.490 00:19:47.490 ' 00:19:47.490 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:19:47.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.490 --rc genhtml_branch_coverage=1 00:19:47.490 --rc genhtml_function_coverage=1 00:19:47.490 --rc genhtml_legend=1 00:19:47.491 --rc geninfo_all_blocks=1 00:19:47.491 --rc geninfo_unexecuted_blocks=1 00:19:47.491 00:19:47.491 ' 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:19:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.491 --rc genhtml_branch_coverage=1 00:19:47.491 --rc genhtml_function_coverage=1 00:19:47.491 --rc genhtml_legend=1 00:19:47.491 --rc geninfo_all_blocks=1 00:19:47.491 --rc geninfo_unexecuted_blocks=1 00:19:47.491 00:19:47.491 ' 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:19:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.491 --rc genhtml_branch_coverage=1 00:19:47.491 --rc genhtml_function_coverage=1 00:19:47.491 --rc genhtml_legend=1 00:19:47.491 --rc geninfo_all_blocks=1 00:19:47.491 --rc geninfo_unexecuted_blocks=1 00:19:47.491 00:19:47.491 ' 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.491 09:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.491 09:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.795 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.795 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:52.795 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:52.795 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:52.795 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:52.795 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:52.795 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:52.795 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:52.795 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:52.795 09:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:52.796 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:52.796 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:52.796 Found net devices under 0000:86:00.0: cvl_0_0 00:19:52.796 09:52:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:52.796 Found net devices under 0000:86:00.1: cvl_0_1 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:52.796 09:52:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:53.364 09:52:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:55.267 09:52:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.538 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:00.539 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:00.539 09:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:00.539 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:00.539 Found net devices under 0000:86:00.0: cvl_0_0 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:00.539 Found net devices under 0000:86:00.1: cvl_0_1 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:00.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:00.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:20:00.539 00:20:00.539 --- 10.0.0.2 ping statistics --- 00:20:00.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.539 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:20:00.539 00:20:00.539 --- 10.0.0.1 ping statistics --- 00:20:00.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.539 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2958965 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2958965 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2958965 ']' 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.539 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.539 [2024-11-20 09:52:23.753061] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:20:00.539 [2024-11-20 09:52:23.753116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.539 [2024-11-20 09:52:23.835069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.798 [2024-11-20 09:52:23.880339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.798 [2024-11-20 09:52:23.880375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.798 [2024-11-20 09:52:23.880382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.798 [2024-11-20 09:52:23.880388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.798 [2024-11-20 09:52:23.880394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:00.798 [2024-11-20 09:52:23.881838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.798 [2024-11-20 09:52:23.881945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.798 [2024-11-20 09:52:23.882056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.798 [2024-11-20 09:52:23.882057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:00.798 09:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.798 09:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.798 [2024-11-20 09:52:24.079835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.798 Malloc1 00:20:00.798 09:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.798 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.057 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.057 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:01.057 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.057 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.057 [2024-11-20 09:52:24.136895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.057 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.057 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2959203 00:20:01.057 09:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:01.057 09:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:02.963 "tick_rate": 2300000000, 00:20:02.963 "poll_groups": [ 00:20:02.963 { 00:20:02.963 "name": "nvmf_tgt_poll_group_000", 00:20:02.963 "admin_qpairs": 1, 00:20:02.963 "io_qpairs": 1, 00:20:02.963 "current_admin_qpairs": 1, 00:20:02.963 "current_io_qpairs": 1, 00:20:02.963 "pending_bdev_io": 0, 00:20:02.963 "completed_nvme_io": 19072, 00:20:02.963 "transports": [ 00:20:02.963 { 00:20:02.963 "trtype": "TCP" 00:20:02.963 } 00:20:02.963 ] 00:20:02.963 }, 00:20:02.963 { 00:20:02.963 "name": "nvmf_tgt_poll_group_001", 00:20:02.963 "admin_qpairs": 0, 00:20:02.963 "io_qpairs": 1, 00:20:02.963 "current_admin_qpairs": 0, 00:20:02.963 "current_io_qpairs": 1, 00:20:02.963 "pending_bdev_io": 0, 00:20:02.963 "completed_nvme_io": 19567, 00:20:02.963 "transports": [ 00:20:02.963 { 00:20:02.963 "trtype": "TCP" 00:20:02.963 } 00:20:02.963 ] 00:20:02.963 }, 00:20:02.963 { 00:20:02.963 "name": "nvmf_tgt_poll_group_002", 00:20:02.963 "admin_qpairs": 0, 00:20:02.963 "io_qpairs": 1, 00:20:02.963 "current_admin_qpairs": 0, 00:20:02.963 "current_io_qpairs": 1, 00:20:02.963 "pending_bdev_io": 0, 00:20:02.963 "completed_nvme_io": 19557, 00:20:02.963 
"transports": [ 00:20:02.963 { 00:20:02.963 "trtype": "TCP" 00:20:02.963 } 00:20:02.963 ] 00:20:02.963 }, 00:20:02.963 { 00:20:02.963 "name": "nvmf_tgt_poll_group_003", 00:20:02.963 "admin_qpairs": 0, 00:20:02.963 "io_qpairs": 1, 00:20:02.963 "current_admin_qpairs": 0, 00:20:02.963 "current_io_qpairs": 1, 00:20:02.963 "pending_bdev_io": 0, 00:20:02.963 "completed_nvme_io": 19119, 00:20:02.963 "transports": [ 00:20:02.963 { 00:20:02.963 "trtype": "TCP" 00:20:02.963 } 00:20:02.963 ] 00:20:02.963 } 00:20:02.963 ] 00:20:02.963 }' 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:02.963 09:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2959203 00:20:11.083 Initializing NVMe Controllers 00:20:11.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:11.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:11.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:11.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:11.083 Initialization complete. Launching workers. 
00:20:11.083 ======================================================== 00:20:11.083 Latency(us) 00:20:11.083 Device Information : IOPS MiB/s Average min max 00:20:11.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10169.90 39.73 6293.36 2368.01 10040.71 00:20:11.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10378.40 40.54 6167.68 2421.45 14345.85 00:20:11.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10298.20 40.23 6216.01 2353.22 10633.59 00:20:11.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10210.30 39.88 6267.75 2360.30 10861.05 00:20:11.083 ======================================================== 00:20:11.083 Total : 41056.80 160.38 6235.82 2353.22 14345.85 00:20:11.083 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.083 rmmod nvme_tcp 00:20:11.083 rmmod nvme_fabrics 00:20:11.083 rmmod nvme_keyring 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:11.083 09:52:34 
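As a consistency check on the spdk_nvme_perf table above, the per-lcore rows recombine into the reported Total line: IOPS sum directly, MiB/s follows from the 4096-byte I/O size, and the overall average latency is the IOPS-weighted mean of the per-core averages. A sketch with the figures transcribed from this run:

```python
# (iops, avg_latency_us) per lcore, transcribed from the perf table above
rows = {
    4: (10169.90, 6293.36),
    5: (10378.40, 6167.68),
    6: (10298.20, 6216.01),
    7: (10210.30, 6267.75),
}

total_iops = sum(iops for iops, _ in rows.values())
# 4 KiB reads, so MiB/s = iops * 4096 / 2**20 = iops / 256
mib_s = total_iops / 256
# overall average latency = IOPS-weighted mean of the per-core averages
weighted_avg = sum(iops * lat for iops, lat in rows.values()) / total_iops

print(f"{total_iops:.2f} IOPS, {mib_s:.2f} MiB/s, {weighted_avg:.2f} us avg")
# -> 41056.80 IOPS, 160.38 MiB/s, 6235.82 us avg  (matches the Total row)
```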
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2958965 ']' 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2958965 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2958965 ']' 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2958965 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958965 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958965' 00:20:11.083 killing process with pid 2958965 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2958965 00:20:11.083 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2958965 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:11.342 
09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.342 09:52:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.879 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:13.879 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:13.879 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:13.879 09:52:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:14.447 09:52:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:16.352 09:52:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:21.640 09:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:21.640 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:21.640 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.640 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:21.641 Found net devices under 0000:86:00.0: cvl_0_0 00:20:21.641 09:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:21.641 Found net devices under 0000:86:00.1: cvl_0_1 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:21.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:20:21.641 00:20:21.641 --- 10.0.0.2 ping statistics --- 00:20:21.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.641 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:20:21.641 00:20:21.641 --- 10.0.0.1 ping statistics --- 00:20:21.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.641 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:21.641 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:21.900 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:21.900 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:21.900 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:21.900 09:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:21.900 net.core.busy_poll = 1 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:21.900 net.core.busy_read = 1 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2962791 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2962791 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2962791 ']' 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.900 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.159 [2024-11-20 09:52:45.270793] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:20:22.159 [2024-11-20 09:52:45.270848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.159 [2024-11-20 09:52:45.353861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.159 [2024-11-20 09:52:45.399731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.159 [2024-11-20 09:52:45.399766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.159 [2024-11-20 09:52:45.399790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.159 [2024-11-20 09:52:45.399796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:22.159 [2024-11-20 09:52:45.399801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.159 [2024-11-20 09:52:45.401270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.159 [2024-11-20 09:52:45.401292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.159 [2024-11-20 09:52:45.401383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.159 [2024-11-20 09:52:45.401383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.159 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.419 [2024-11-20 09:52:45.611522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.419 09:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.419 Malloc1 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.419 [2024-11-20 09:52:45.669068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2963007 
00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:22.419 09:52:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:24.954 "tick_rate": 2300000000, 00:20:24.954 "poll_groups": [ 00:20:24.954 { 00:20:24.954 "name": "nvmf_tgt_poll_group_000", 00:20:24.954 "admin_qpairs": 1, 00:20:24.954 "io_qpairs": 1, 00:20:24.954 "current_admin_qpairs": 1, 00:20:24.954 "current_io_qpairs": 1, 00:20:24.954 "pending_bdev_io": 0, 00:20:24.954 "completed_nvme_io": 24714, 00:20:24.954 "transports": [ 00:20:24.954 { 00:20:24.954 "trtype": "TCP" 00:20:24.954 } 00:20:24.954 ] 00:20:24.954 }, 00:20:24.954 { 00:20:24.954 "name": "nvmf_tgt_poll_group_001", 00:20:24.954 "admin_qpairs": 0, 00:20:24.954 "io_qpairs": 3, 00:20:24.954 "current_admin_qpairs": 0, 00:20:24.954 "current_io_qpairs": 3, 00:20:24.954 "pending_bdev_io": 0, 00:20:24.954 "completed_nvme_io": 29997, 00:20:24.954 "transports": [ 00:20:24.954 { 00:20:24.954 "trtype": "TCP" 00:20:24.954 } 00:20:24.954 ] 00:20:24.954 }, 00:20:24.954 { 00:20:24.954 "name": "nvmf_tgt_poll_group_002", 00:20:24.954 "admin_qpairs": 0, 00:20:24.954 "io_qpairs": 0, 00:20:24.954 "current_admin_qpairs": 0, 
00:20:24.954 "current_io_qpairs": 0, 00:20:24.954 "pending_bdev_io": 0, 00:20:24.954 "completed_nvme_io": 0, 00:20:24.954 "transports": [ 00:20:24.954 { 00:20:24.954 "trtype": "TCP" 00:20:24.954 } 00:20:24.954 ] 00:20:24.954 }, 00:20:24.954 { 00:20:24.954 "name": "nvmf_tgt_poll_group_003", 00:20:24.954 "admin_qpairs": 0, 00:20:24.954 "io_qpairs": 0, 00:20:24.954 "current_admin_qpairs": 0, 00:20:24.954 "current_io_qpairs": 0, 00:20:24.954 "pending_bdev_io": 0, 00:20:24.954 "completed_nvme_io": 0, 00:20:24.954 "transports": [ 00:20:24.954 { 00:20:24.954 "trtype": "TCP" 00:20:24.954 } 00:20:24.954 ] 00:20:24.954 } 00:20:24.954 ] 00:20:24.954 }' 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:24.954 09:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2963007 00:20:33.071 Initializing NVMe Controllers 00:20:33.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:33.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:33.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:33.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:33.071 Initialization complete. Launching workers. 
00:20:33.071 ======================================================== 00:20:33.071 Latency(us) 00:20:33.071 Device Information : IOPS MiB/s Average min max 00:20:33.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4935.60 19.28 13007.97 1743.83 60487.71 00:20:33.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4988.80 19.49 12831.05 1765.47 62259.68 00:20:33.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14765.20 57.68 4334.27 1768.18 45588.05 00:20:33.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5234.60 20.45 12228.67 1483.45 58895.54 00:20:33.071 ======================================================== 00:20:33.071 Total : 29924.20 116.89 8562.38 1483.45 62259.68 00:20:33.071 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.071 rmmod nvme_tcp 00:20:33.071 rmmod nvme_fabrics 00:20:33.071 rmmod nvme_keyring 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:33.071 09:52:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2962791 ']' 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2962791 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2962791 ']' 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2962791 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962791 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962791' 00:20:33.071 killing process with pid 2962791 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2962791 00:20:33.071 09:52:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2962791 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:33.071 
09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.071 09:52:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:36.359 00:20:36.359 real 0m49.653s 00:20:36.359 user 2m43.766s 00:20:36.359 sys 0m10.336s 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.359 ************************************ 00:20:36.359 END TEST nvmf_perf_adq 00:20:36.359 ************************************ 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.359 ************************************ 00:20:36.359 START TEST nvmf_shutdown 00:20:36.359 ************************************ 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:36.359 * Looking for test storage... 00:20:36.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # lcov --version 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.359 09:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.359 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:20:36.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.360 --rc genhtml_branch_coverage=1 00:20:36.360 --rc genhtml_function_coverage=1 00:20:36.360 --rc genhtml_legend=1 00:20:36.360 --rc geninfo_all_blocks=1 00:20:36.360 --rc geninfo_unexecuted_blocks=1 00:20:36.360 00:20:36.360 ' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:20:36.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.360 --rc genhtml_branch_coverage=1 00:20:36.360 --rc genhtml_function_coverage=1 00:20:36.360 --rc genhtml_legend=1 00:20:36.360 --rc geninfo_all_blocks=1 00:20:36.360 --rc geninfo_unexecuted_blocks=1 00:20:36.360 00:20:36.360 ' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:20:36.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.360 --rc genhtml_branch_coverage=1 00:20:36.360 --rc genhtml_function_coverage=1 00:20:36.360 --rc genhtml_legend=1 00:20:36.360 --rc geninfo_all_blocks=1 00:20:36.360 --rc geninfo_unexecuted_blocks=1 00:20:36.360 00:20:36.360 ' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:20:36.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.360 --rc genhtml_branch_coverage=1 00:20:36.360 --rc genhtml_function_coverage=1 00:20:36.360 --rc genhtml_legend=1 00:20:36.360 --rc geninfo_all_blocks=1 00:20:36.360 --rc geninfo_unexecuted_blocks=1 00:20:36.360 00:20:36.360 ' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:36.360 ************************************ 00:20:36.360 START TEST nvmf_shutdown_tc1 00:20:36.360 ************************************ 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.360 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.361 09:52:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:42.934 09:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.934 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.935 09:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:42.935 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.935 09:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:42.935 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:42.935 Found net devices under 0000:86:00.0: cvl_0_0 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:42.935 Found net devices under 0000:86:00.1: cvl_0_1 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:42.935 09:53:05 
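[Editor's note] The "Found net devices under 0000:86:00.x" records above come from `gather_supported_nvmf_pci_devs` globbing sysfs: each PCI function's bound kernel net device appears under `/sys/bus/pci/devices/<addr>/net/`. A minimal runnable sketch of that lookup is below; it builds a mock sysfs tree in a temp directory so it can run without the E810 hardware (the paths and device names `cvl_0_0`/`cvl_0_1` mirror the log, the mock tree is an assumption for illustration).

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup the log performs per NIC: the entries under
# /sys/bus/pci/devices/<pci>/net/ name the net devices bound to that PCI
# function. A mock sysfs tree stands in for real hardware here.
set -eu

sysfs=$(mktemp -d)
trap 'rm -rf "$sysfs"' EXIT

# Fabricate two E810 functions with one net device each, as in the log.
mkdir -p "$sysfs/devices/0000:86:00.0/net/cvl_0_0"
mkdir -p "$sysfs/devices/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("$sysfs/devices/$pci/net/"*)   # glob the bound netdev dirs
    pci_net_devs=("${pci_net_devs[@]##*/}")      # strip paths, keep names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```

The same prefix-stripping expansion (`${pci_net_devs[@]##*/}`) appears verbatim at `nvmf/common.sh@427` in the trace above.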
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.935 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:42.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:20:42.936 00:20:42.936 --- 10.0.0.2 ping statistics --- 00:20:42.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.936 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:20:42.936 00:20:42.936 --- 10.0.0.1 ping statistics --- 00:20:42.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.936 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2968426 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2968426 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2968426 ']' 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:42.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.936 09:53:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:42.936 [2024-11-20 09:53:05.650081] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:20:42.936 [2024-11-20 09:53:05.650126] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.936 [2024-11-20 09:53:05.732555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.936 [2024-11-20 09:53:05.772730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.936 [2024-11-20 09:53:05.772772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.936 [2024-11-20 09:53:05.772779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.936 [2024-11-20 09:53:05.772785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.936 [2024-11-20 09:53:05.772793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.936 [2024-11-20 09:53:05.774429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.936 [2024-11-20 09:53:05.774537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.936 [2024-11-20 09:53:05.774624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.936 [2024-11-20 09:53:05.774625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:43.195 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.195 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:43.195 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.195 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.195 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.453 [2024-11-20 09:53:06.533836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.453 09:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.453 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:43.454 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:43.454 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.454 09:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.454 Malloc1 00:20:43.454 [2024-11-20 09:53:06.641506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.454 Malloc2 00:20:43.454 Malloc3 00:20:43.454 Malloc4 00:20:43.712 Malloc5 00:20:43.712 Malloc6 00:20:43.712 Malloc7 00:20:43.712 Malloc8 00:20:43.712 Malloc9 
00:20:43.712 Malloc10 00:20:43.712 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.712 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:43.712 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.712 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.971 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2968735 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2968735 /var/tmp/bdevperf.sock 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2968735 ']' 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.972 { 00:20:43.972 "params": { 00:20:43.972 "name": "Nvme$subsystem", 00:20:43.972 "trtype": "$TEST_TRANSPORT", 00:20:43.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.972 "adrfam": "ipv4", 00:20:43.972 "trsvcid": "$NVMF_PORT", 00:20:43.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.972 "hdgst": ${hdgst:-false}, 00:20:43.972 "ddgst": ${ddgst:-false} 00:20:43.972 }, 00:20:43.972 "method": "bdev_nvme_attach_controller" 00:20:43.972 } 00:20:43.972 EOF 00:20:43.972 )") 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.972 { 00:20:43.972 "params": { 00:20:43.972 "name": "Nvme$subsystem", 00:20:43.972 "trtype": "$TEST_TRANSPORT", 00:20:43.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.972 "adrfam": "ipv4", 00:20:43.972 "trsvcid": "$NVMF_PORT", 00:20:43.972 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.972 "hdgst": ${hdgst:-false}, 00:20:43.972 "ddgst": ${ddgst:-false} 00:20:43.972 }, 00:20:43.972 "method": "bdev_nvme_attach_controller" 00:20:43.972 } 00:20:43.972 EOF 00:20:43.972 )") 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.972 { 00:20:43.972 "params": { 00:20:43.972 "name": "Nvme$subsystem", 00:20:43.972 "trtype": "$TEST_TRANSPORT", 00:20:43.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.972 "adrfam": "ipv4", 00:20:43.972 "trsvcid": "$NVMF_PORT", 00:20:43.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.972 "hdgst": ${hdgst:-false}, 00:20:43.972 "ddgst": ${ddgst:-false} 00:20:43.972 }, 00:20:43.972 "method": "bdev_nvme_attach_controller" 00:20:43.972 } 00:20:43.972 EOF 00:20:43.972 )") 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.972 { 00:20:43.972 "params": { 00:20:43.972 "name": "Nvme$subsystem", 00:20:43.972 "trtype": "$TEST_TRANSPORT", 00:20:43.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.972 "adrfam": "ipv4", 00:20:43.972 "trsvcid": "$NVMF_PORT", 00:20:43.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.972 "hdgst": 
${hdgst:-false}, 00:20:43.972 "ddgst": ${ddgst:-false} 00:20:43.972 }, 00:20:43.972 "method": "bdev_nvme_attach_controller" 00:20:43.972 } 00:20:43.972 EOF 00:20:43.972 )") 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.972 { 00:20:43.972 "params": { 00:20:43.972 "name": "Nvme$subsystem", 00:20:43.972 "trtype": "$TEST_TRANSPORT", 00:20:43.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.972 "adrfam": "ipv4", 00:20:43.972 "trsvcid": "$NVMF_PORT", 00:20:43.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.972 "hdgst": ${hdgst:-false}, 00:20:43.972 "ddgst": ${ddgst:-false} 00:20:43.972 }, 00:20:43.972 "method": "bdev_nvme_attach_controller" 00:20:43.972 } 00:20:43.972 EOF 00:20:43.972 )") 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.972 { 00:20:43.972 "params": { 00:20:43.972 "name": "Nvme$subsystem", 00:20:43.972 "trtype": "$TEST_TRANSPORT", 00:20:43.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.972 "adrfam": "ipv4", 00:20:43.972 "trsvcid": "$NVMF_PORT", 00:20:43.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.972 "hdgst": ${hdgst:-false}, 00:20:43.972 "ddgst": ${ddgst:-false} 00:20:43.972 }, 00:20:43.972 "method": "bdev_nvme_attach_controller" 
00:20:43.972 } 00:20:43.972 EOF 00:20:43.972 )") 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.972 { 00:20:43.972 "params": { 00:20:43.972 "name": "Nvme$subsystem", 00:20:43.972 "trtype": "$TEST_TRANSPORT", 00:20:43.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.972 "adrfam": "ipv4", 00:20:43.972 "trsvcid": "$NVMF_PORT", 00:20:43.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.972 "hdgst": ${hdgst:-false}, 00:20:43.972 "ddgst": ${ddgst:-false} 00:20:43.972 }, 00:20:43.972 "method": "bdev_nvme_attach_controller" 00:20:43.972 } 00:20:43.972 EOF 00:20:43.972 )") 00:20:43.972 [2024-11-20 09:53:07.114120] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:20:43.972 [2024-11-20 09:53:07.114169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.972 { 00:20:43.972 "params": { 00:20:43.972 "name": "Nvme$subsystem", 00:20:43.972 "trtype": "$TEST_TRANSPORT", 00:20:43.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.972 "adrfam": "ipv4", 00:20:43.972 "trsvcid": "$NVMF_PORT", 00:20:43.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.972 "hdgst": ${hdgst:-false}, 00:20:43.972 "ddgst": ${ddgst:-false} 00:20:43.972 }, 00:20:43.972 "method": "bdev_nvme_attach_controller" 00:20:43.972 } 00:20:43.972 EOF 00:20:43.972 )") 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.972 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.972 { 00:20:43.972 "params": { 00:20:43.972 "name": "Nvme$subsystem", 00:20:43.972 "trtype": "$TEST_TRANSPORT", 00:20:43.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.972 "adrfam": "ipv4", 00:20:43.972 "trsvcid": "$NVMF_PORT", 00:20:43.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.972 "hdgst": ${hdgst:-false}, 
00:20:43.972 "ddgst": ${ddgst:-false} 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 } 00:20:43.973 EOF 00:20:43.973 )") 00:20:43.973 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.973 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:43.973 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:43.973 { 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme$subsystem", 00:20:43.973 "trtype": "$TEST_TRANSPORT", 00:20:43.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "$NVMF_PORT", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.973 "hdgst": ${hdgst:-false}, 00:20:43.973 "ddgst": ${ddgst:-false} 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 } 00:20:43.973 EOF 00:20:43.973 )") 00:20:43.973 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:43.973 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:43.973 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:43.973 09:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme1", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 },{ 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme2", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 },{ 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme3", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 },{ 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme4", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 },{ 00:20:43.973 "params": { 
00:20:43.973 "name": "Nvme5", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 },{ 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme6", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 },{ 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme7", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 },{ 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme8", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 },{ 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme9", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 },{ 00:20:43.973 "params": { 00:20:43.973 "name": "Nvme10", 00:20:43.973 "trtype": "tcp", 00:20:43.973 "traddr": "10.0.0.2", 00:20:43.973 "adrfam": "ipv4", 00:20:43.973 "trsvcid": "4420", 00:20:43.973 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:43.973 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:43.973 "hdgst": false, 00:20:43.973 "ddgst": false 00:20:43.973 }, 00:20:43.973 "method": "bdev_nvme_attach_controller" 00:20:43.973 }' 00:20:43.973 [2024-11-20 09:53:07.191102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.973 [2024-11-20 09:53:07.232730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.874 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.874 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:45.874 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:45.874 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.874 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:45.874 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.874 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2968735 00:20:45.874 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:45.874 09:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:46.807 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2968735 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:46.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2968426 00:20:46.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:46.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:46.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:46.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:46.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.807 { 00:20:46.807 "params": { 00:20:46.807 "name": "Nvme$subsystem", 00:20:46.807 "trtype": "$TEST_TRANSPORT", 00:20:46.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.807 "adrfam": "ipv4", 00:20:46.807 "trsvcid": "$NVMF_PORT", 00:20:46.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.807 "hdgst": ${hdgst:-false}, 00:20:46.807 "ddgst": ${ddgst:-false} 00:20:46.807 }, 00:20:46.807 "method": "bdev_nvme_attach_controller" 00:20:46.807 } 00:20:46.807 EOF 00:20:46.807 )") 00:20:46.807 09:53:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.807 09:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.807 { 00:20:46.807 "params": { 00:20:46.807 "name": "Nvme$subsystem", 00:20:46.807 "trtype": "$TEST_TRANSPORT", 00:20:46.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.807 "adrfam": "ipv4", 00:20:46.807 "trsvcid": "$NVMF_PORT", 00:20:46.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.807 "hdgst": ${hdgst:-false}, 00:20:46.807 "ddgst": ${ddgst:-false} 00:20:46.807 }, 00:20:46.807 "method": "bdev_nvme_attach_controller" 00:20:46.807 } 00:20:46.807 EOF 00:20:46.807 )") 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.807 { 00:20:46.807 "params": { 00:20:46.807 "name": "Nvme$subsystem", 00:20:46.807 "trtype": "$TEST_TRANSPORT", 00:20:46.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.807 "adrfam": "ipv4", 00:20:46.807 "trsvcid": "$NVMF_PORT", 00:20:46.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.807 "hdgst": ${hdgst:-false}, 00:20:46.807 "ddgst": ${ddgst:-false} 00:20:46.807 }, 00:20:46.807 "method": "bdev_nvme_attach_controller" 00:20:46.807 } 00:20:46.807 EOF 00:20:46.807 )") 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.807 
09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.807 { 00:20:46.807 "params": { 00:20:46.807 "name": "Nvme$subsystem", 00:20:46.807 "trtype": "$TEST_TRANSPORT", 00:20:46.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.807 "adrfam": "ipv4", 00:20:46.807 "trsvcid": "$NVMF_PORT", 00:20:46.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.807 "hdgst": ${hdgst:-false}, 00:20:46.807 "ddgst": ${ddgst:-false} 00:20:46.807 }, 00:20:46.807 "method": "bdev_nvme_attach_controller" 00:20:46.807 } 00:20:46.807 EOF 00:20:46.807 )") 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.807 { 00:20:46.807 "params": { 00:20:46.807 "name": "Nvme$subsystem", 00:20:46.807 "trtype": "$TEST_TRANSPORT", 00:20:46.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.807 "adrfam": "ipv4", 00:20:46.807 "trsvcid": "$NVMF_PORT", 00:20:46.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.807 "hdgst": ${hdgst:-false}, 00:20:46.807 "ddgst": ${ddgst:-false} 00:20:46.807 }, 00:20:46.807 "method": "bdev_nvme_attach_controller" 00:20:46.807 } 00:20:46.807 EOF 00:20:46.807 )") 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:20:46.807 { 00:20:46.807 "params": { 00:20:46.807 "name": "Nvme$subsystem", 00:20:46.807 "trtype": "$TEST_TRANSPORT", 00:20:46.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.807 "adrfam": "ipv4", 00:20:46.807 "trsvcid": "$NVMF_PORT", 00:20:46.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.807 "hdgst": ${hdgst:-false}, 00:20:46.807 "ddgst": ${ddgst:-false} 00:20:46.807 }, 00:20:46.807 "method": "bdev_nvme_attach_controller" 00:20:46.807 } 00:20:46.807 EOF 00:20:46.807 )") 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.807 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.808 { 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme$subsystem", 00:20:46.808 "trtype": "$TEST_TRANSPORT", 00:20:46.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "$NVMF_PORT", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.808 "hdgst": ${hdgst:-false}, 00:20:46.808 "ddgst": ${ddgst:-false} 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 } 00:20:46.808 EOF 00:20:46.808 )") 00:20:46.808 [2024-11-20 09:53:10.042774] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:20:46.808 [2024-11-20 09:53:10.042825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969226 ] 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.808 { 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme$subsystem", 00:20:46.808 "trtype": "$TEST_TRANSPORT", 00:20:46.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "$NVMF_PORT", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.808 "hdgst": ${hdgst:-false}, 00:20:46.808 "ddgst": ${ddgst:-false} 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 } 00:20:46.808 EOF 00:20:46.808 )") 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.808 { 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme$subsystem", 00:20:46.808 "trtype": "$TEST_TRANSPORT", 00:20:46.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "$NVMF_PORT", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.808 "hdgst": 
${hdgst:-false}, 00:20:46.808 "ddgst": ${ddgst:-false} 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 } 00:20:46.808 EOF 00:20:46.808 )") 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.808 { 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme$subsystem", 00:20:46.808 "trtype": "$TEST_TRANSPORT", 00:20:46.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "$NVMF_PORT", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.808 "hdgst": ${hdgst:-false}, 00:20:46.808 "ddgst": ${ddgst:-false} 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 } 00:20:46.808 EOF 00:20:46.808 )") 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:46.808 09:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme1", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 },{ 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme2", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 },{ 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme3", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 },{ 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme4", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 },{ 00:20:46.808 "params": { 
00:20:46.808 "name": "Nvme5", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 },{ 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme6", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 },{ 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme7", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 },{ 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme8", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 },{ 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme9", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 },{ 00:20:46.808 "params": { 00:20:46.808 "name": "Nvme10", 00:20:46.808 "trtype": "tcp", 00:20:46.808 "traddr": "10.0.0.2", 00:20:46.808 "adrfam": "ipv4", 00:20:46.808 "trsvcid": "4420", 00:20:46.808 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:46.808 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:46.808 "hdgst": false, 00:20:46.808 "ddgst": false 00:20:46.808 }, 00:20:46.808 "method": "bdev_nvme_attach_controller" 00:20:46.808 }' 00:20:46.808 [2024-11-20 09:53:10.121736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.066 [2024-11-20 09:53:10.164188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.442 Running I/O for 1 seconds... 00:20:49.636 2194.00 IOPS, 137.12 MiB/s 00:20:49.636 Latency(us) 00:20:49.636 [2024-11-20T08:53:12.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.636 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme1n1 : 1.14 280.08 17.50 0.00 0.00 226459.29 16298.52 221568.67 00:20:49.636 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme2n1 : 1.06 246.40 15.40 0.00 0.00 252552.71 3533.25 235245.75 00:20:49.636 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme3n1 : 1.13 286.29 17.89 0.00 0.00 211422.20 16526.47 211538.81 00:20:49.636 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme4n1 : 1.13 282.31 17.64 0.00 0.00 214149.92 14816.83 218833.25 00:20:49.636 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme5n1 : 1.11 231.44 14.47 0.00 0.00 258121.68 19489.84 231598.53 00:20:49.636 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme6n1 : 1.16 276.90 17.31 0.00 0.00 212545.98 6069.20 223392.28 00:20:49.636 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme7n1 : 1.15 279.20 17.45 0.00 0.00 207996.84 16526.47 219745.06 00:20:49.636 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme8n1 : 1.15 282.42 17.65 0.00 0.00 202417.93 2350.75 232510.33 00:20:49.636 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme9n1 : 1.16 276.19 17.26 0.00 0.00 204250.25 15614.66 222480.47 00:20:49.636 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:49.636 Verification LBA range: start 0x0 length 0x400 00:20:49.636 Nvme10n1 : 1.20 266.11 16.63 0.00 0.00 202180.88 13962.02 238892.97 00:20:49.636 [2024-11-20T08:53:12.968Z] =================================================================================================================== 00:20:49.636 [2024-11-20T08:53:12.968Z] Total : 2707.34 169.21 0.00 0.00 217728.01 2350.75 238892.97 00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:49.895 rmmod nvme_tcp
00:20:49.895 rmmod nvme_fabrics
00:20:49.895 rmmod nvme_keyring
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2968426 ']'
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2968426
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2968426 ']'
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2968426
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2968426
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2968426'
00:20:49.895 killing process with pid 2968426
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2968426
00:20:49.895 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2968426
00:20:50.462 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:50.462 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:50.462 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:50.462 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr
00:20:50.462 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save
00:20:50.463 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:50.463 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore
00:20:50.463 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:50.463 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:50.463 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:50.463 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:50.463 09:53:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:52.369
00:20:52.369 real 0m16.003s
00:20:52.369 user 0m37.087s
00:20:52.369 sys 0m5.810s
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:52.369 ************************************
00:20:52.369 END TEST nvmf_shutdown_tc1
00:20:52.369 ************************************
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:52.369 ************************************
START TEST nvmf_shutdown_tc2
00:20:52.369 ************************************
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=()
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=()
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=()
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=()
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=()
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:20:52.369 Found 0000:86:00.0 (0x8086 - 0x159b)
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:20:52.369 Found 0000:86:00.1 (0x8086 - 0x159b)
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:20:52.369 Found net devices under 0000:86:00.0: cvl_0_0
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:52.369 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:20:52.369 Found net devices under 0000:86:00.1: cvl_0_1
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:52.370 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:52.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:52.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms
00:20:52.629
00:20:52.629 --- 10.0.0.2 ping statistics ---
00:20:52.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.629 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:52.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:52.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms
00:20:52.629
00:20:52.629 --- 10.0.0.1 ping statistics ---
00:20:52.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.629 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:52.629 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2970253
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2970253
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2970253 ']'
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:52.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:52.888 09:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:52.888 [2024-11-20 09:53:16.031628] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization...
00:20:52.888 [2024-11-20 09:53:16.031680] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:52.888 [2024-11-20 09:53:16.110308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:52.888 [2024-11-20 09:53:16.153815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:52.888 [2024-11-20 09:53:16.153855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:52.888 [2024-11-20 09:53:16.153862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:52.888 [2024-11-20 09:53:16.153869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:52.888 [2024-11-20 09:53:16.153874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:52.888 [2024-11-20 09:53:16.155481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:52.888 [2024-11-20 09:53:16.155590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:52.888 [2024-11-20 09:53:16.155694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:52.888 [2024-11-20 09:53:16.155696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:53.148 [2024-11-20 09:53:16.297075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:53.148 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:53.148 Malloc1
00:20:53.148 [2024-11-20 09:53:16.401447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:53.148 Malloc2
00:20:53.148 Malloc3
00:20:53.408 Malloc4
00:20:53.408 Malloc5
00:20:53.408 Malloc6
00:20:53.408 Malloc7
00:20:53.408 Malloc8
00:20:53.408 Malloc9
00:20:53.670 Malloc10
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2970385
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2970385 /var/tmp/bdevperf.sock
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2970385 ']'
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=()
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:53.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:53.670 {
00:20:53.670 "params": {
00:20:53.670 "name": "Nvme$subsystem",
00:20:53.670 "trtype": "$TEST_TRANSPORT",
00:20:53.670 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:53.670 "adrfam": "ipv4",
00:20:53.670 "trsvcid": "$NVMF_PORT",
00:20:53.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:53.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:53.670 "hdgst": ${hdgst:-false},
00:20:53.670 "ddgst": ${ddgst:-false}
00:20:53.670 },
00:20:53.670 "method": "bdev_nvme_attach_controller"
00:20:53.670 }
00:20:53.670 EOF
00:20:53.670 )")
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:53.670 {
00:20:53.670 "params": {
00:20:53.670 "name": "Nvme$subsystem",
00:20:53.670 "trtype": "$TEST_TRANSPORT",
00:20:53.670 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:53.670 "adrfam": "ipv4",
00:20:53.670 "trsvcid": "$NVMF_PORT",
00:20:53.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:53.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:53.670 "hdgst": ${hdgst:-false},
00:20:53.670 "ddgst": ${ddgst:-false}
00:20:53.670 },
00:20:53.670 "method": "bdev_nvme_attach_controller"
00:20:53.670 }
00:20:53.670 EOF
00:20:53.670 )")
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:53.670 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:53.670 {
00:20:53.670 "params": {
00:20:53.670 "name": "Nvme$subsystem",
00:20:53.670 "trtype": "$TEST_TRANSPORT",
00:20:53.671 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:53.671 "adrfam": "ipv4",
00:20:53.671 "trsvcid": "$NVMF_PORT",
00:20:53.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:53.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:53.671 "hdgst": ${hdgst:-false},
00:20:53.671 "ddgst": ${ddgst:-false}
00:20:53.671 },
00:20:53.671 "method": "bdev_nvme_attach_controller"
00:20:53.671 }
00:20:53.671 EOF
00:20:53.671 )")
00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:20:53.671 {
00:20:53.671 "params": {
00:20:53.671 "name": "Nvme$subsystem",
00:20:53.671 "trtype": "$TEST_TRANSPORT",
00:20:53.671 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:53.671 "adrfam": "ipv4",
00:20:53.671 "trsvcid": "$NVMF_PORT",
00:20:53.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:53.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:53.671 "hdgst": ${hdgst:-false},
00:20:53.671 "ddgst": ${ddgst:-false}
00:20:53.671 },
00:20:53.671 "method": "bdev_nvme_attach_controller"
00:20:53.671 } 00:20:53.671 EOF 00:20:53.671 )") 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.671 { 00:20:53.671 "params": { 00:20:53.671 "name": "Nvme$subsystem", 00:20:53.671 "trtype": "$TEST_TRANSPORT", 00:20:53.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.671 "adrfam": "ipv4", 00:20:53.671 "trsvcid": "$NVMF_PORT", 00:20:53.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.671 "hdgst": ${hdgst:-false}, 00:20:53.671 "ddgst": ${ddgst:-false} 00:20:53.671 }, 00:20:53.671 "method": "bdev_nvme_attach_controller" 00:20:53.671 } 00:20:53.671 EOF 00:20:53.671 )") 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.671 [2024-11-20 09:53:16.876616] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:20:53.671 [2024-11-20 09:53:16.876666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2970385 ] 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.671 { 00:20:53.671 "params": { 00:20:53.671 "name": "Nvme$subsystem", 00:20:53.671 "trtype": "$TEST_TRANSPORT", 00:20:53.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.671 "adrfam": "ipv4", 00:20:53.671 "trsvcid": "$NVMF_PORT", 00:20:53.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.671 "hdgst": ${hdgst:-false}, 00:20:53.671 "ddgst": ${ddgst:-false} 00:20:53.671 }, 00:20:53.671 "method": "bdev_nvme_attach_controller" 00:20:53.671 } 00:20:53.671 EOF 00:20:53.671 )") 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.671 { 00:20:53.671 "params": { 00:20:53.671 "name": "Nvme$subsystem", 00:20:53.671 "trtype": "$TEST_TRANSPORT", 00:20:53.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.671 "adrfam": "ipv4", 00:20:53.671 "trsvcid": "$NVMF_PORT", 00:20:53.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.671 "hdgst": ${hdgst:-false}, 00:20:53.671 "ddgst": ${ddgst:-false} 00:20:53.671 }, 00:20:53.671 "method": 
"bdev_nvme_attach_controller" 00:20:53.671 } 00:20:53.671 EOF 00:20:53.671 )") 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.671 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.671 { 00:20:53.671 "params": { 00:20:53.671 "name": "Nvme$subsystem", 00:20:53.671 "trtype": "$TEST_TRANSPORT", 00:20:53.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.671 "adrfam": "ipv4", 00:20:53.671 "trsvcid": "$NVMF_PORT", 00:20:53.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.671 "hdgst": ${hdgst:-false}, 00:20:53.671 "ddgst": ${ddgst:-false} 00:20:53.671 }, 00:20:53.671 "method": "bdev_nvme_attach_controller" 00:20:53.671 } 00:20:53.672 EOF 00:20:53.672 )") 00:20:53.672 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:53.672 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
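The `gen_nvmf_target_json` steps traced above (one heredoc JSON fragment per subsystem, appended to `config`, comma-joined via `IFS=,`, then validated with `jq .`) can be sketched as a small standalone script. This is a minimal sketch of the pattern, not the verbatim SPDK `nvmf/common.sh` helper; the transport, IP, and port values are placeholder assumptions mirroring the variables seen in the log.

```shell
#!/bin/sh
# Placeholder values standing in for the env vars the test exports.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=""
for subsystem in 1 2; do
  # Capture one attach-controller fragment per subsystem from a heredoc,
  # as the traced loop over "${@:-1}" does.
  frag=$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
  "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4",
  "trsvcid": "$NVMF_PORT",
  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
)
  # Comma-join the fragments, mirroring the IFS=, / printf '%s\n' step.
  config="${config:+$config,}$frag"
done

# The real helper pipes the assembled document through `jq .` before
# handing it to bdevperf as --json /dev/fd/63.
json="{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[$config]}]}"
echo "$json"
```

One result of this design, visible in the `printf '%s\n'` output above, is that the host-side config never hardcodes controller names: each `NvmeN` / `cnodeN` / `hostN` triple is derived from the same loop variable, so the ten-subsystem run is just `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10`.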
00:20:53.672 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:53.672 09:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:53.672 "params": { 00:20:53.672 "name": "Nvme1", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 },{ 00:20:53.672 "params": { 00:20:53.672 "name": "Nvme2", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 },{ 00:20:53.672 "params": { 00:20:53.672 "name": "Nvme3", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 },{ 00:20:53.672 "params": { 00:20:53.672 "name": "Nvme4", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 },{ 00:20:53.672 "params": { 
00:20:53.672 "name": "Nvme5", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 },{ 00:20:53.672 "params": { 00:20:53.672 "name": "Nvme6", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 },{ 00:20:53.672 "params": { 00:20:53.672 "name": "Nvme7", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 },{ 00:20:53.672 "params": { 00:20:53.672 "name": "Nvme8", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 },{ 00:20:53.672 "params": { 00:20:53.672 "name": "Nvme9", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 },{ 00:20:53.672 "params": { 00:20:53.672 "name": "Nvme10", 00:20:53.672 "trtype": "tcp", 00:20:53.672 "traddr": "10.0.0.2", 00:20:53.672 "adrfam": "ipv4", 00:20:53.672 "trsvcid": "4420", 00:20:53.672 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:53.672 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:53.672 "hdgst": false, 00:20:53.672 "ddgst": false 00:20:53.672 }, 00:20:53.672 "method": "bdev_nvme_attach_controller" 00:20:53.672 }' 00:20:53.672 [2024-11-20 09:53:16.955479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.672 [2024-11-20 09:53:16.997227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.751 Running I/O for 10 seconds... 00:20:55.751 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.751 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:55.752 09:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:55.752 09:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:55.752 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:55.752 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.752 09:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.752 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.752 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.752 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.016 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.016 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:56.016 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:56.016 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2970385 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2970385 ']' 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2970385 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2970385 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.281 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2970385' 00:20:56.281 killing process with pid 2970385 00:20:56.282 09:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2970385 00:20:56.282 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2970385 00:20:56.282 Received shutdown signal, test time was about 0.961872 seconds 00:20:56.282 00:20:56.282 Latency(us) 00:20:56.282 [2024-11-20T08:53:19.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.282 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme1n1 : 0.94 270.98 16.94 0.00 0.00 233522.53 13278.16 220656.86 00:20:56.282 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme2n1 : 0.95 268.39 16.77 0.00 0.00 231308.47 18350.08 218833.25 00:20:56.282 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme3n1 : 0.95 268.18 16.76 0.00 0.00 227654.12 16754.42 203332.56 00:20:56.282 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme4n1 : 0.96 332.92 20.81 0.00 0.00 178797.52 14246.96 216097.84 00:20:56.282 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme5n1 : 0.95 269.55 16.85 0.00 0.00 218569.24 16754.42 221568.67 00:20:56.282 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme6n1 : 0.93 280.53 17.53 0.00 0.00 205325.83 3761.20 218833.25 00:20:56.282 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme7n1 : 
0.94 289.78 18.11 0.00 0.00 193596.00 4786.98 219745.06 00:20:56.282 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme8n1 : 0.95 274.55 17.16 0.00 0.00 202015.27 4046.14 193302.71 00:20:56.282 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme9n1 : 0.96 266.53 16.66 0.00 0.00 205627.66 17438.27 238892.97 00:20:56.282 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:56.282 Verification LBA range: start 0x0 length 0x400 00:20:56.282 Nvme10n1 : 0.92 208.53 13.03 0.00 0.00 255799.06 18805.98 235245.75 00:20:56.282 [2024-11-20T08:53:19.614Z] =================================================================================================================== 00:20:56.282 [2024-11-20T08:53:19.614Z] Total : 2729.93 170.62 0.00 0.00 213146.26 3761.20 238892.97 00:20:56.541 09:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2970253 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.478 rmmod nvme_tcp 00:20:57.478 rmmod nvme_fabrics 00:20:57.478 rmmod nvme_keyring 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2970253 ']' 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2970253 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2970253 ']' 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2970253 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:20:57.478 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2970253 00:20:57.737 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:57.737 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:57.737 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2970253' 00:20:57.737 killing process with pid 2970253 00:20:57.737 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2970253 00:20:57.737 09:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2970253 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.997 09:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.997 09:53:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.535 00:21:00.535 real 0m7.628s 00:21:00.535 user 0m22.870s 00:21:00.535 sys 0m1.430s 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:00.535 ************************************ 00:21:00.535 END TEST nvmf_shutdown_tc2 00:21:00.535 ************************************ 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:00.535 ************************************ 00:21:00.535 START TEST nvmf_shutdown_tc3 00:21:00.535 ************************************ 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:00.535 09:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.535 09:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.535 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:00.536 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:00.536 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:00.536 Found net devices under 0000:86:00.0: cvl_0_0 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.536 09:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:00.536 Found net devices under 0000:86:00.1: cvl_0_1 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:21:00.536 00:21:00.536 --- 10.0.0.2 ping statistics --- 00:21:00.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.536 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:21:00.536 00:21:00.536 --- 10.0.0.1 ping statistics --- 00:21:00.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.536 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.536 
09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2971585 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2971585 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2971585 ']' 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.536 09:53:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.536 [2024-11-20 09:53:23.731640] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:21:00.536 [2024-11-20 09:53:23.731690] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.536 [2024-11-20 09:53:23.815611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.536 [2024-11-20 09:53:23.859805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.536 [2024-11-20 09:53:23.859837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.536 [2024-11-20 09:53:23.859845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.536 [2024-11-20 09:53:23.859851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.536 [2024-11-20 09:53:23.859856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:00.536 [2024-11-20 09:53:23.861504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.536 [2024-11-20 09:53:23.861611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.536 [2024-11-20 09:53:23.861739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.536 [2024-11-20 09:53:23.861740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.472 [2024-11-20 09:53:24.612712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.472 09:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.472 09:53:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.472 Malloc1 00:21:01.472 [2024-11-20 09:53:24.717225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.472 Malloc2 00:21:01.472 Malloc3 00:21:01.731 Malloc4 00:21:01.731 Malloc5 00:21:01.731 Malloc6 00:21:01.731 Malloc7 00:21:01.731 Malloc8 00:21:01.731 Malloc9 
00:21:01.991 Malloc10 00:21:01.991 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.991 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:01.991 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2971873 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2971873 /var/tmp/bdevperf.sock 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2971873 ']' 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:01.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.992 { 00:21:01.992 "params": { 00:21:01.992 "name": "Nvme$subsystem", 00:21:01.992 "trtype": "$TEST_TRANSPORT", 00:21:01.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.992 "adrfam": "ipv4", 00:21:01.992 "trsvcid": "$NVMF_PORT", 00:21:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.992 "hdgst": ${hdgst:-false}, 00:21:01.992 "ddgst": ${ddgst:-false} 00:21:01.992 }, 00:21:01.992 "method": "bdev_nvme_attach_controller" 00:21:01.992 } 00:21:01.992 EOF 00:21:01.992 )") 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.992 { 00:21:01.992 "params": { 00:21:01.992 "name": "Nvme$subsystem", 00:21:01.992 "trtype": "$TEST_TRANSPORT", 00:21:01.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.992 
"adrfam": "ipv4", 00:21:01.992 "trsvcid": "$NVMF_PORT", 00:21:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.992 "hdgst": ${hdgst:-false}, 00:21:01.992 "ddgst": ${ddgst:-false} 00:21:01.992 }, 00:21:01.992 "method": "bdev_nvme_attach_controller" 00:21:01.992 } 00:21:01.992 EOF 00:21:01.992 )") 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.992 { 00:21:01.992 "params": { 00:21:01.992 "name": "Nvme$subsystem", 00:21:01.992 "trtype": "$TEST_TRANSPORT", 00:21:01.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.992 "adrfam": "ipv4", 00:21:01.992 "trsvcid": "$NVMF_PORT", 00:21:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.992 "hdgst": ${hdgst:-false}, 00:21:01.992 "ddgst": ${ddgst:-false} 00:21:01.992 }, 00:21:01.992 "method": "bdev_nvme_attach_controller" 00:21:01.992 } 00:21:01.992 EOF 00:21:01.992 )") 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.992 { 00:21:01.992 "params": { 00:21:01.992 "name": "Nvme$subsystem", 00:21:01.992 "trtype": "$TEST_TRANSPORT", 00:21:01.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.992 "adrfam": "ipv4", 00:21:01.992 "trsvcid": "$NVMF_PORT", 00:21:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:01.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.992 "hdgst": ${hdgst:-false}, 00:21:01.992 "ddgst": ${ddgst:-false} 00:21:01.992 }, 00:21:01.992 "method": "bdev_nvme_attach_controller" 00:21:01.992 } 00:21:01.992 EOF 00:21:01.992 )") 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.992 { 00:21:01.992 "params": { 00:21:01.992 "name": "Nvme$subsystem", 00:21:01.992 "trtype": "$TEST_TRANSPORT", 00:21:01.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.992 "adrfam": "ipv4", 00:21:01.992 "trsvcid": "$NVMF_PORT", 00:21:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.992 "hdgst": ${hdgst:-false}, 00:21:01.992 "ddgst": ${ddgst:-false} 00:21:01.992 }, 00:21:01.992 "method": "bdev_nvme_attach_controller" 00:21:01.992 } 00:21:01.992 EOF 00:21:01.992 )") 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.992 { 00:21:01.992 "params": { 00:21:01.992 "name": "Nvme$subsystem", 00:21:01.992 "trtype": "$TEST_TRANSPORT", 00:21:01.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.992 "adrfam": "ipv4", 00:21:01.992 "trsvcid": "$NVMF_PORT", 00:21:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.992 "hdgst": ${hdgst:-false}, 00:21:01.992 "ddgst": 
${ddgst:-false} 00:21:01.992 }, 00:21:01.992 "method": "bdev_nvme_attach_controller" 00:21:01.992 } 00:21:01.992 EOF 00:21:01.992 )") 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.992 { 00:21:01.992 "params": { 00:21:01.992 "name": "Nvme$subsystem", 00:21:01.992 "trtype": "$TEST_TRANSPORT", 00:21:01.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.992 "adrfam": "ipv4", 00:21:01.992 "trsvcid": "$NVMF_PORT", 00:21:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.992 "hdgst": ${hdgst:-false}, 00:21:01.992 "ddgst": ${ddgst:-false} 00:21:01.992 }, 00:21:01.992 "method": "bdev_nvme_attach_controller" 00:21:01.992 } 00:21:01.992 EOF 00:21:01.992 )") 00:21:01.992 [2024-11-20 09:53:25.190614] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:21:01.992 [2024-11-20 09:53:25.190661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971873 ] 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.992 { 00:21:01.992 "params": { 00:21:01.992 "name": "Nvme$subsystem", 00:21:01.992 "trtype": "$TEST_TRANSPORT", 00:21:01.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.992 "adrfam": "ipv4", 00:21:01.992 "trsvcid": "$NVMF_PORT", 00:21:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.992 "hdgst": ${hdgst:-false}, 00:21:01.992 "ddgst": ${ddgst:-false} 00:21:01.992 }, 00:21:01.992 "method": "bdev_nvme_attach_controller" 00:21:01.992 } 00:21:01.992 EOF 00:21:01.992 )") 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.992 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.992 { 00:21:01.992 "params": { 00:21:01.992 "name": "Nvme$subsystem", 00:21:01.992 "trtype": "$TEST_TRANSPORT", 00:21:01.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.992 "adrfam": "ipv4", 00:21:01.992 "trsvcid": "$NVMF_PORT", 00:21:01.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.993 "hdgst": 
${hdgst:-false}, 00:21:01.993 "ddgst": ${ddgst:-false} 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 } 00:21:01.993 EOF 00:21:01.993 )") 00:21:01.993 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.993 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.993 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.993 { 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme$subsystem", 00:21:01.993 "trtype": "$TEST_TRANSPORT", 00:21:01.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "$NVMF_PORT", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.993 "hdgst": ${hdgst:-false}, 00:21:01.993 "ddgst": ${ddgst:-false} 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 } 00:21:01.993 EOF 00:21:01.993 )") 00:21:01.993 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:01.993 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
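The xtrace above loops `for subsystem in "${@:-1}"`, appends one heredoc JSON fragment per subsystem, and comma-joins the fragments with `IFS=,` before validating with `jq .` and handing the result to bdevperf. A standalone sketch of that pattern — the function name `gen_attach_config` and the default values for `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT` are assumptions here, not the actual `nvmf/common.sh` helper:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config generation traced above. The defaults
# for TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are assumptions.
gen_attach_config() {
	local subsystem config=()
	for subsystem in "${@:-1}"; do
		config+=("$(
			cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	# Comma-join the fragments, as the traced `IFS=,` + printf does
	local IFS=,
	printf '%s\n' "${config[*]}"
}

gen_attach_config 1 2
```

In the trace the equivalent output is the `printf '%s\n' '{ ... },{ ... }'` block that follows, one object per subsystem Nvme1 through Nvme10.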
00:21:01.993 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:01.993 09:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme1", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 },{ 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme2", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 },{ 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme3", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 },{ 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme4", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 },{ 00:21:01.993 "params": { 
00:21:01.993 "name": "Nvme5", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 },{ 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme6", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 },{ 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme7", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 },{ 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme8", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 },{ 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme9", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 },{ 00:21:01.993 "params": { 00:21:01.993 "name": "Nvme10", 00:21:01.993 "trtype": "tcp", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "adrfam": "ipv4", 00:21:01.993 "trsvcid": "4420", 00:21:01.993 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:01.993 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:01.993 "hdgst": false, 00:21:01.993 "ddgst": false 00:21:01.993 }, 00:21:01.993 "method": "bdev_nvme_attach_controller" 00:21:01.993 }' 00:21:01.993 [2024-11-20 09:53:25.270042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.993 [2024-11-20 09:53:25.311545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.898 Running I/O for 10 seconds... 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:03.898 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
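The polling traced here (read_io_count=3 on the first pass, 85 on the next) is `waitforio` from target/shutdown.sh: up to 10 retries, 0.25 s apart, until Nvme1n1 reports at least 100 reads. A self-contained sketch, where `read_io_count_stub` is a stand-in (an assumption for this sketch) for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` pipeline; its samples mirror the 3 → 85 → 200 progression seen in the log:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio retry loop traced above. read_io_count_stub
# stands in for the rpc_cmd + jq pipeline (assumption); its canned samples
# mirror the ramp-up the log shows.
attempt=0
read_io_count_stub() {
	local -a samples=(3 85 200)
	echo "${samples[$1]:-200}"
}

waitforio_sketch() {
	local ret=1 i read_io_count
	for ((i = 10; i != 0; i--)); do
		read_io_count=$(read_io_count_stub "$attempt")
		attempt=$((attempt + 1))
		if [ "$read_io_count" -ge 100 ]; then
			ret=0 # enough IO observed; safe to begin the shutdown test
			break
		fi
		sleep 0.25
	done
	return $ret
}

waitforio_sketch && echo "read_io_count reached threshold"
```

The `break` / `return 0` at the end of the traced sequence corresponds to `ret=0` here; if all 10 polls stay under 100, the helper returns failure and the test aborts.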
00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=85 00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 85 -ge 100 ']' 00:21:04.157 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:04.416 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:04.416 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:04.416 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:04.416 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:04.416 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.416 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=200 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 200 -ge 100 ']' 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2971585 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2971585 ']' 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2971585 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2971585 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2971585' 00:21:04.687 killing process with pid 2971585 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2971585 00:21:04.687 09:53:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2971585 00:21:04.687 [2024-11-20 09:53:27.828098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the 
state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 
09:53:27.828325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828412] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.687 [2024-11-20 09:53:27.828565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.828578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 
is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.828585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.828592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.828598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.828604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46700 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.829687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe290 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.829722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe290 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.829731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe290 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.829737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe290 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.829744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe290 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.829751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe290 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.829757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe290 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.829764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe290 is same with the state(6) to be set 
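The `killprocess 2971585` call a few records up follows autotest_common.sh's guard sequence: non-empty pid, `kill -0` liveness check, `uname` plus `ps --no-headers -o comm=` to refuse signalling a `sudo` wrapper, then kill and wait. A minimal sketch of that pattern — the function name and the non-Linux fallback are illustrative assumptions, not the real helper:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard sequence traced above; the name and the
# non-Linux branch are assumptions for illustration.
killprocess_sketch() {
	local pid=$1 process_name
	[ -n "$pid" ] || return 1               # the '[' -z "$pid" ']' guard
	kill -0 "$pid" 2>/dev/null || return 1  # process must still be alive
	if [ "$(uname)" = Linux ]; then
		process_name=$(ps --no-headers -o comm= "$pid")
	else
		process_name=$pid
	fi
	# Never signal a sudo wrapper directly
	[ "$process_name" != sudo ] || return 1
	echo "killing process with pid $pid"
	kill "$pid" 2>/dev/null || true
	wait "$pid" 2>/dev/null || true         # reap so the pid is not reused
}

# Demonstrate against a throwaway background sleep
sleep 30 &
killprocess_sketch $!
```

The sudo check matters because the harness launches most daemons through sudo; killing the wrapper would orphan the actual reactor process (here `reactor_1`).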
00:21:04.688 [2024-11-20 09:53:27.829770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe290 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.831300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb46bf0 is same with the state(6) to be set 00:21:04.688 [2024-11-20 09:53:27.833860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb475b0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.836165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb482d0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837554] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.689 [2024-11-20 09:53:27.837625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-20 09:53:27.837639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.689 he state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.689 [2024-11-20 09:53:27.837656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.689 [2024-11-20 09:53:27.837663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.689 [2024-11-20 09:53:27.837670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-20 09:53:27.837678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.689 he state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837688] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.689 [2024-11-20 09:53:27.837694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.689 [2024-11-20 09:53:27.837701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773c40 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.689 [2024-11-20 09:53:27.837735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.837742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 
09:53:27.837756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-11-20 09:53:27.837757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with tid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 he state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with t[2024-11-20 09:53:27.837766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:21:04.690 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.837782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with t[2024-11-20 09:53:27.837792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nshe state(6) to be set 00:21:04.690 id:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.837801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.690 [2024-11-20 09:53:27.837809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fcd50 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with t[2024-11-20 09:53:27.837836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nshe state(6) to be set 00:21:04.690 id:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.837847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-11-20 09:53:27.837862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with tid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 he state(6) to be set 
00:21:04.690 [2024-11-20 09:53:27.837871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with t[2024-11-20 09:53:27.837871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:21:04.690 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.837887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.837902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211610 is same with the 
state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.837942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb487c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.837953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.837969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.837983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.837993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a590 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171e300 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838120] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17284c0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17277a0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fd1b0 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.690 [2024-11-20 09:53:27.838413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.690 [2024-11-20 09:53:27.838419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fac70 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 
00:21:04.690 [2024-11-20 09:53:27.838609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838684] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.690 [2024-11-20 09:53:27.838765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb48c90 is same with the state(6) to be set 00:21:04.691 [2024-11-20 09:53:27.838944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.838971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.838989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.838998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 
09:53:27.839239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 
[2024-11-20 09:53:27.839592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 
09:53:27.839932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.839975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.839982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.840006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:04.691 [2024-11-20 09:53:27.840142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.840153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.840165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.840172] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.840181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.691 [2024-11-20 09:53:27.840188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.691 [2024-11-20 09:53:27.840198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:04.692 [2024-11-20 09:53:27.840354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.692 [2024-11-20 09:53:27.840361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command/completion pairs repeated for cid:14-63 (lba:26368-32640, step 128, timestamps 09:53:27.840369-09:53:27.848952), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:04.692 [2024-11-20 09:53:27.848960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1700ed0 is same with the state(6) to be set
[... WRITE command/completion pairs re-printed for cid:58-63 (lba:32000-32640, timestamps 09:53:27.849041-09:53:27.849132), each completed as ABORTED - SQ DELETION (00/08) ...]
[... identical READ command/completion pairs repeated for cid:0-57 (lba:24576-31872, step 128, timestamps 09:53:27.849142-09:53:27.850037), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:04.694 [2024-11-20 09:53:27.850352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1773c40 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.850378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fcd50 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.850393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211610 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.850406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173a590 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.850432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.694 [2024-11-20 09:53:27.850442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.694 [2024-11-20 09:53:27.850450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.694 [2024-11-20 09:53:27.850458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.850466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.694 [2024-11-20 09:53:27.850473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.850483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.694 [2024-11-20 09:53:27.850490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.850497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d140 is same with the state(6) to be set 00:21:04.694 [2024-11-20 09:53:27.850512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171e300 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.850525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17284c0 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.850536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17277a0 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.850553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fd1b0 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.850568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fac70 (9): Bad 
file descriptor 00:21:04.694 [2024-11-20 09:53:27.853641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:04.694 [2024-11-20 09:53:27.854145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:04.694 [2024-11-20 09:53:27.854175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:04.694 [2024-11-20 09:53:27.854456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.694 [2024-11-20 09:53:27.854475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fac70 with addr=10.0.0.2, port=4420 00:21:04.694 [2024-11-20 09:53:27.854487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fac70 is same with the state(6) to be set 00:21:04.694 [2024-11-20 09:53:27.855581] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.694 [2024-11-20 09:53:27.855743] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.694 [2024-11-20 09:53:27.855883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.694 [2024-11-20 09:53:27.855903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17284c0 with addr=10.0.0.2, port=4420 00:21:04.694 [2024-11-20 09:53:27.855913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17284c0 is same with the state(6) to be set 00:21:04.694 [2024-11-20 09:53:27.856113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.694 [2024-11-20 09:53:27.856128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171e300 with addr=10.0.0.2, port=4420 00:21:04.694 [2024-11-20 09:53:27.856138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171e300 
is same with the state(6) to be set 00:21:04.694 [2024-11-20 09:53:27.856152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fac70 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.856216] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.694 [2024-11-20 09:53:27.856268] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.694 [2024-11-20 09:53:27.856332] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.694 [2024-11-20 09:53:27.856418] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.694 [2024-11-20 09:53:27.856471] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:04.694 [2024-11-20 09:53:27.856500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17284c0 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.856516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171e300 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.856534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:04.694 [2024-11-20 09:53:27.856545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:04.694 [2024-11-20 09:53:27.856556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:04.694 [2024-11-20 09:53:27.856568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:21:04.694 [2024-11-20 09:53:27.856671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:04.694 [2024-11-20 09:53:27.856682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:04.694 [2024-11-20 09:53:27.856692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:04.694 [2024-11-20 09:53:27.856701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:04.694 [2024-11-20 09:53:27.856710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:04.694 [2024-11-20 09:53:27.856718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:04.694 [2024-11-20 09:53:27.856728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:04.694 [2024-11-20 09:53:27.856736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:21:04.694 [2024-11-20 09:53:27.860372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176d140 (9): Bad file descriptor 00:21:04.694 [2024-11-20 09:53:27.860525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.860978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.860988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.861000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.861009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.861022] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.861031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.861043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.694 [2024-11-20 09:53:27.861053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.694 [2024-11-20 09:53:27.861065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861137] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 
09:53:27.861376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.695 [2024-11-20 09:53:27.861730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861842] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.861883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.861893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15014d0 is same with the state(6) to be set 00:21:04.695 [2024-11-20 09:53:27.863260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.863277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.863291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.863301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.863313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.863322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.863335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.863345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.863356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.863365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.863377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.863389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.863401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.695 [2024-11-20 09:53:27.863410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.695 [2024-11-20 09:53:27.863421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:04.696 [2024-11-20 09:53:27.863452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.863989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.863997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864004] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 
09:53:27.864197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.864437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.864444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15026a0 is same with the state(6) to be set 00:21:04.696 [2024-11-20 09:53:27.865458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:04.696 [2024-11-20 09:53:27.865471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.865482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.865489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.865498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.865504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.865514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.865524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.865533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.865540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.865550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.865557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.696 [2024-11-20 09:53:27.865565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.696 [2024-11-20 09:53:27.865573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865833] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865922] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.865984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.865991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 
09:53:27.866110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866195] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.697 [2024-11-20 09:53:27.866373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.866380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.866387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ffa10 is same with the state(6) to be set 00:21:04.697 [2024-11-20 09:53:27.867376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867726] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.697 [2024-11-20 09:53:27.867757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.697 [2024-11-20 09:53:27.867764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867818] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.867992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.867999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 
09:53:27.868008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.698 [2024-11-20 09:53:27.868263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.698 [2024-11-20 09:53:27.868270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.698 [2024-11-20 09:53:27.868278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.868285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.868294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.868301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.868309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.868318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.868327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.868334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.868343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.868350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.868359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.868366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.868375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.868384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.868393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.868400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.868409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.868416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.868423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17039d0 is same with the state(6) to be set
00:21:04.698 [2024-11-20 09:53:27.869434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.698 [2024-11-20 09:53:27.869817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.698 [2024-11-20 09:53:27.869825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.869990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.869997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.870457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.870464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264c410 is same with the state(6) to be set
00:21:04.699 [2024-11-20 09:53:27.871484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.699 [2024-11-20 09:53:27.871903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.699 [2024-11-20 09:53:27.871911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.871918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.871927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.871933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.871942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.871951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.871966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.871973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.871981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.871989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.871998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.872004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.872013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.872020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.872028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.872035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.872044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.872051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.872059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.872065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.872074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.872081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.872089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.700 [2024-11-20 09:53:27.877796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.700 [2024-11-20 09:53:27.877803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:04.700 [2024-11-20 09:53:27.877811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.877928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.877936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1587f30 is same with the state(6) to be set
00:21:04.700 [2024-11-20 09:53:27.878926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:04.700 [2024-11-20 09:53:27.878943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:04.700 [2024-11-20 09:53:27.878955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:04.700 [2024-11-20 09:53:27.878966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:04.700 [2024-11-20 09:53:27.879052] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:04.700 [2024-11-20 09:53:27.879071] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:21:04.700 [2024-11-20 09:53:27.879144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:04.700 [2024-11-20 09:53:27.879157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:04.700 [2024-11-20 09:53:27.879370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.700 [2024-11-20 09:53:27.879385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fd1b0 with addr=10.0.0.2, port=4420
00:21:04.700 [2024-11-20 09:53:27.879394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fd1b0 is same with the state(6) to be set
00:21:04.700 [2024-11-20 09:53:27.879615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.700 [2024-11-20 09:53:27.879627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fcd50 with addr=10.0.0.2, port=4420
00:21:04.700 [2024-11-20 09:53:27.879634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fcd50 is same with the state(6) to be set
00:21:04.700 [2024-11-20 09:53:27.879850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.700 [2024-11-20 09:53:27.879860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17277a0 with addr=10.0.0.2, port=4420
00:21:04.700 [2024-11-20 09:53:27.879868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17277a0 is same with the state(6) to be set
00:21:04.700 [2024-11-20 09:53:27.880087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:04.700 [2024-11-20 09:53:27.880098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211610 with addr=10.0.0.2, port=4420
00:21:04.700 [2024-11-20 09:53:27.880106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211610 is same with the state(6) to be set
00:21:04.700 [2024-11-20 09:53:27.881277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.881294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.700 [2024-11-20 09:53:27.881307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.700 [2024-11-20 09:53:27.881314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.881989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.881997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.882004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.701 [2024-11-20 09:53:27.882012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.701 [2024-11-20 09:53:27.882019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:04.702 [2024-11-20 09:53:27.882248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:04.702 [2024-11-20 09:53:27.882254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.702 [2024-11-20 09:53:27.882263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.702 [2024-11-20 09:53:27.882270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.702 [2024-11-20 09:53:27.882278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.702 [2024-11-20 09:53:27.882285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.702 [2024-11-20 09:53:27.882294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1586a60 is same with the state(6) to be set 00:21:04.702 [2024-11-20 09:53:27.883500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:04.702 [2024-11-20 09:53:27.883518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:04.702 [2024-11-20 09:53:27.883528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:04.702 task offset: 33408 on job bdev=Nvme3n1 fails 00:21:04.702 00:21:04.702 Latency(us) 00:21:04.702 [2024-11-20T08:53:28.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.702 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.702 Job: Nvme1n1 ended in about 0.94 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme1n1 : 0.94 210.26 13.14 68.31 0.00 227330.01 11226.60 219745.06 00:21:04.702 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.702 Job: Nvme2n1 ended in 
about 0.94 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme2n1 : 0.94 208.65 13.04 68.13 0.00 224867.17 15272.74 219745.06 00:21:04.702 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.702 Job: Nvme3n1 ended in about 0.93 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme3n1 : 0.93 276.58 17.29 69.14 0.00 176746.76 12822.26 208803.39 00:21:04.702 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.702 Job: Nvme4n1 ended in about 0.94 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme4n1 : 0.94 210.35 13.15 61.62 0.00 220649.07 15272.74 206979.78 00:21:04.702 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.702 Job: Nvme5n1 ended in about 0.93 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme5n1 : 0.93 207.22 12.95 69.07 0.00 213247.11 14816.83 232510.33 00:21:04.702 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.702 Job: Nvme6n1 ended in about 0.93 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme6n1 : 0.93 207.02 12.94 69.01 0.00 209508.40 15272.74 229774.91 00:21:04.702 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.702 Job: Nvme7n1 ended in about 0.94 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme7n1 : 0.94 203.53 12.72 67.84 0.00 209496.60 21541.40 235245.75 00:21:04.702 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.702 Job: Nvme8n1 ended in about 0.95 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme8n1 : 0.95 203.10 12.69 67.70 0.00 206004.87 14075.99 202420.76 00:21:04.702 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, 
IO size: 65536) 00:21:04.702 Job: Nvme9n1 ended in about 0.96 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme9n1 : 0.96 204.77 12.80 66.86 0.00 201844.30 16754.42 231598.53 00:21:04.702 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:04.702 Job: Nvme10n1 ended in about 0.95 seconds with error 00:21:04.702 Verification LBA range: start 0x0 length 0x400 00:21:04.702 Nvme10n1 : 0.95 134.34 8.40 67.17 0.00 266665.33 18805.98 249834.63 00:21:04.702 [2024-11-20T08:53:28.034Z] =================================================================================================================== 00:21:04.702 [2024-11-20T08:53:28.034Z] Total : 2065.82 129.11 674.86 0.00 213414.99 11226.60 249834.63 00:21:04.702 [2024-11-20 09:53:27.915702] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:04.702 [2024-11-20 09:53:27.915757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:04.702 [2024-11-20 09:53:27.916098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.702 [2024-11-20 09:53:27.916117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173a590 with addr=10.0.0.2, port=4420 00:21:04.702 [2024-11-20 09:53:27.916129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a590 is same with the state(6) to be set 00:21:04.702 [2024-11-20 09:53:27.916346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.702 [2024-11-20 09:53:27.916358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1773c40 with addr=10.0.0.2, port=4420 00:21:04.702 [2024-11-20 09:53:27.916366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1773c40 is same with the state(6) to be set 00:21:04.702 [2024-11-20 09:53:27.916380] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fd1b0 (9): Bad file descriptor 00:21:04.702 [2024-11-20 09:53:27.916392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fcd50 (9): Bad file descriptor 00:21:04.702 [2024-11-20 09:53:27.916402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17277a0 (9): Bad file descriptor 00:21:04.702 [2024-11-20 09:53:27.916412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211610 (9): Bad file descriptor 00:21:04.702 [2024-11-20 09:53:27.916762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.702 [2024-11-20 09:53:27.916784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fac70 with addr=10.0.0.2, port=4420 00:21:04.702 [2024-11-20 09:53:27.916793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fac70 is same with the state(6) to be set 00:21:04.702 [2024-11-20 09:53:27.916980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.702 [2024-11-20 09:53:27.916992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171e300 with addr=10.0.0.2, port=4420 00:21:04.702 [2024-11-20 09:53:27.917001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171e300 is same with the state(6) to be set 00:21:04.702 [2024-11-20 09:53:27.917146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.702 [2024-11-20 09:53:27.917158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17284c0 with addr=10.0.0.2, port=4420 00:21:04.702 [2024-11-20 09:53:27.917166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17284c0 is same with the state(6) to be set 00:21:04.702 [2024-11-20 09:53:27.917305] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.702 [2024-11-20 09:53:27.917316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x176d140 with addr=10.0.0.2, port=4420 00:21:04.702 [2024-11-20 09:53:27.917324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d140 is same with the state(6) to be set 00:21:04.702 [2024-11-20 09:53:27.917334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173a590 (9): Bad file descriptor 00:21:04.702 [2024-11-20 09:53:27.917343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1773c40 (9): Bad file descriptor 00:21:04.702 [2024-11-20 09:53:27.917352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:04.702 [2024-11-20 09:53:27.917359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:04.702 [2024-11-20 09:53:27.917368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:04.702 [2024-11-20 09:53:27.917378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:04.702 [2024-11-20 09:53:27.917387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:04.702 [2024-11-20 09:53:27.917393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:04.702 [2024-11-20 09:53:27.917401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:04.702 [2024-11-20 09:53:27.917407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:21:04.702 [2024-11-20 09:53:27.917414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:04.702 [2024-11-20 09:53:27.917422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:04.702 [2024-11-20 09:53:27.917429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:04.702 [2024-11-20 09:53:27.917435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:04.702 [2024-11-20 09:53:27.917443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:04.702 [2024-11-20 09:53:27.917448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:04.702 [2024-11-20 09:53:27.917456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:04.702 [2024-11-20 09:53:27.917463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:04.702 [2024-11-20 09:53:27.917513] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:04.702 [2024-11-20 09:53:27.917526] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:21:04.703 [2024-11-20 09:53:27.917852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fac70 (9): Bad file descriptor 00:21:04.703 [2024-11-20 09:53:27.917866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171e300 (9): Bad file descriptor 00:21:04.703 [2024-11-20 09:53:27.917875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17284c0 (9): Bad file descriptor 00:21:04.703 [2024-11-20 09:53:27.917884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176d140 (9): Bad file descriptor 00:21:04.703 [2024-11-20 09:53:27.917892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.917899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.917905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.917912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:04.703 [2024-11-20 09:53:27.917919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.917926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.917932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.917938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:04.703 [2024-11-20 09:53:27.917977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:04.703 [2024-11-20 09:53:27.917989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:04.703 [2024-11-20 09:53:27.917998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:04.703 [2024-11-20 09:53:27.918006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:04.703 [2024-11-20 09:53:27.918033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.918041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.918047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.918053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:04.703 [2024-11-20 09:53:27.918062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.918068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.918075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.918081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:21:04.703 [2024-11-20 09:53:27.918088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.918095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.918106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.918112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:04.703 [2024-11-20 09:53:27.918118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.918125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.918132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.918138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:04.703 [2024-11-20 09:53:27.918324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.703 [2024-11-20 09:53:27.918339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1211610 with addr=10.0.0.2, port=4420 00:21:04.703 [2024-11-20 09:53:27.918347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1211610 is same with the state(6) to be set 00:21:04.703 [2024-11-20 09:53:27.918546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.703 [2024-11-20 09:53:27.918557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17277a0 with addr=10.0.0.2, port=4420 00:21:04.703 [2024-11-20 09:53:27.918565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17277a0 is same with the state(6) to be set 00:21:04.703 [2024-11-20 09:53:27.918701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.703 [2024-11-20 09:53:27.918712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fcd50 with addr=10.0.0.2, port=4420 00:21:04.703 [2024-11-20 09:53:27.918719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fcd50 is same with the state(6) to be set 00:21:04.703 [2024-11-20 09:53:27.918851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.703 [2024-11-20 09:53:27.918861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fd1b0 with addr=10.0.0.2, port=4420 00:21:04.703 [2024-11-20 09:53:27.918868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fd1b0 is same with the state(6) to be set 00:21:04.703 [2024-11-20 09:53:27.918896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1211610 (9): Bad file descriptor 00:21:04.703 [2024-11-20 
09:53:27.918907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17277a0 (9): Bad file descriptor 00:21:04.703 [2024-11-20 09:53:27.918916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fcd50 (9): Bad file descriptor 00:21:04.703 [2024-11-20 09:53:27.918925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fd1b0 (9): Bad file descriptor 00:21:04.703 [2024-11-20 09:53:27.918953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.918962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.918970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.918976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:04.703 [2024-11-20 09:53:27.918984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.918991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.918998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.919004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:04.703 [2024-11-20 09:53:27.919014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.919020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.919027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.919033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:04.703 [2024-11-20 09:53:27.919039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:04.703 [2024-11-20 09:53:27.919045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:04.703 [2024-11-20 09:53:27.919052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:04.703 [2024-11-20 09:53:27.919058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:04.962 09:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:05.901 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2971873 00:21:05.901 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:05.901 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2971873 00:21:05.901 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:06.160 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.160 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:06.160 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.160 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2971873 00:21:06.160 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:06.160 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:06.161 rmmod nvme_tcp 00:21:06.161 rmmod nvme_fabrics 00:21:06.161 rmmod nvme_keyring 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:06.161 09:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2971585 ']' 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2971585 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2971585 ']' 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2971585 00:21:06.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2971585) - No such process 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2971585 is not found' 00:21:06.161 Process with pid 2971585 is not found 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.161 09:53:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.064 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:08.064 00:21:08.064 real 0m8.021s 00:21:08.064 user 0m20.295s 00:21:08.064 sys 0m1.376s 00:21:08.064 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.064 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:08.064 ************************************ 00:21:08.064 END TEST nvmf_shutdown_tc3 00:21:08.064 ************************************ 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:08.323 ************************************ 00:21:08.323 START TEST nvmf_shutdown_tc4 00:21:08.323 ************************************ 00:21:08.323 09:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.323 09:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:08.323 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:08.324 09:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:08.324 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:08.324 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.324 09:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:08.324 Found net devices under 0000:86:00.0: cvl_0_0 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:08.324 Found net devices under 0000:86:00.1: cvl_0_1 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:08.324 09:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:08.324 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:08.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:08.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:21:08.583 00:21:08.583 --- 10.0.0.2 ping statistics --- 00:21:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.583 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:08.583 00:21:08.583 --- 10.0.0.1 ping statistics --- 00:21:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.583 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:08.583 09:53:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2973133 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2973133 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2973133 ']' 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.583 09:53:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:08.583 [2024-11-20 09:53:31.849347] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:21:08.583 [2024-11-20 09:53:31.849394] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.842 [2024-11-20 09:53:31.930810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.842 [2024-11-20 09:53:31.974189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.842 [2024-11-20 09:53:31.974229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.842 [2024-11-20 09:53:31.974237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.842 [2024-11-20 09:53:31.974243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.842 [2024-11-20 09:53:31.974248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:08.842 [2024-11-20 09:53:31.975893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.842 [2024-11-20 09:53:31.976004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.842 [2024-11-20 09:53:31.976112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.842 [2024-11-20 09:53:31.976113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:09.409 [2024-11-20 09:53:32.731773] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.409 09:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.409 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.669 09:53:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:09.669 Malloc1 00:21:09.669 [2024-11-20 09:53:32.849900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.669 Malloc2 00:21:09.669 Malloc3 00:21:09.669 Malloc4 00:21:09.669 Malloc5 00:21:09.929 Malloc6 00:21:09.929 Malloc7 00:21:09.929 Malloc8 00:21:09.929 Malloc9 
00:21:09.929 Malloc10 00:21:09.929 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.929 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:09.929 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:09.929 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:10.188 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2973416 00:21:10.189 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:10.189 09:53:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:10.189 [2024-11-20 09:53:33.356644] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2973133 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2973133 ']' 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2973133 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2973133 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2973133' 00:21:15.472 killing process with pid 2973133 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2973133 00:21:15.472 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2973133 00:21:15.472 [2024-11-20 09:53:38.348458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 
09:53:38.348511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.348520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.348527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.348534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.348540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.348547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.348554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.348560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.348566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3dca0 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.348964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349015] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 [2024-11-20 09:53:38.349091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd3e170 is same with the state(6) to be set 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error 
(sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 [2024-11-20 09:53:38.352733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:15.472 starting I/O failed: -6 00:21:15.472 starting I/O failed: -6 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 
00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 Write completed with error (sct=0, sc=8) 00:21:15.472 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 [2024-11-20 09:53:38.353687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write 
completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 
00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 [2024-11-20 09:53:38.354745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:15.473 Write completed with 
error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed 
with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write 
completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.473 Write completed with error (sct=0, sc=8) 00:21:15.473 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 [2024-11-20 09:53:38.356136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 [2024-11-20 09:53:38.356165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 [2024-11-20 09:53:38.356174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 00:21:15.474 starting I/O failed: -6 00:21:15.474 [2024-11-20 09:53:38.356183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 
00:21:15.474 [2024-11-20 09:53:38.356190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 [2024-11-20 09:53:38.356197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 00:21:15.474 starting I/O failed: -6 00:21:15.474 [2024-11-20 09:53:38.356204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 00:21:15.474 [2024-11-20 09:53:38.356211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 00:21:15.474 [2024-11-20 09:53:38.356217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 00:21:15.474 [2024-11-20 09:53:38.356223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3fe50 is same with the state(6) to be set 00:21:15.474 [2024-11-20 09:53:38.356350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:15.474 NVMe io qpair process completion error 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 
00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 [2024-11-20 09:53:38.359837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on 
qpair id 2 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, 
sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 [2024-11-20 09:53:38.360743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 
starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 [2024-11-20 09:53:38.361098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.474 starting I/O failed: -6 00:21:15.474 [2024-11-20 09:53:38.361125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 starting I/O failed: -6 00:21:15.474 [2024-11-20 09:53:38.361133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.474 [2024-11-20 09:53:38.361142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 [2024-11-20 09:53:38.361149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.474 [2024-11-20 09:53:38.361156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.474 Write completed with error (sct=0, sc=8) 00:21:15.474 [2024-11-20 09:53:38.361163]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.474 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.361169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.475 [2024-11-20 09:53:38.361176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.475 [2024-11-20 09:53:38.361183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7680 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error 
(sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.361501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7b70 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 [2024-11-20 09:53:38.361525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7b70 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 [2024-11-20 09:53:38.361533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7b70 is same with the state(6) to be set 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.361540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7b70 is same with the state(6) to be set 00:21:15.475 [2024-11-20 09:53:38.361547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7b70 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 [2024-11-20 09:53:38.361553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca7b70 is same with the state(6) to be set 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O
failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.361747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.361969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8060 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.361996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8060 is same with the state(6) to be set 00:21:15.475 [2024-11-20 09:53:38.362004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8060 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 [2024-11-20 09:53:38.362011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8060 is same with the state(6) to be set 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.362018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8060 is same with the state(6) to be set 00:21:15.475 [2024-11-20 09:53:38.362024] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8060 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 [2024-11-20 09:53:38.362031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8060 is same with the state(6) to be set 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.362037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca8060 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.362296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca71b0 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20
09:53:38.362318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca71b0 is same with the state(6) to be set 00:21:15.475 [2024-11-20 09:53:38.362326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca71b0 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 [2024-11-20 09:53:38.362333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca71b0 is same with the state(6) to be set 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.362341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca71b0 is same with the state(6) to be set 00:21:15.475 [2024-11-20 09:53:38.362348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca71b0 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 [2024-11-20 09:53:38.362355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca71b0 is same with the state(6) to be set 00:21:15.475 starting I/O failed: -6 00:21:15.475 [2024-11-20 09:53:38.362362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca71b0 is same with the state(6) to be set 00:21:15.475 [2024-11-20 09:53:38.362369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca71b0 is same with the state(6) to be set 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O 
failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.475 Write completed with error (sct=0, sc=8) 00:21:15.475 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting 
I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 [2024-11-20 09:53:38.363337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:15.476 NVMe io qpair process completion error 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 
00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 [2024-11-20 09:53:38.364321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 
00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write 
completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 [2024-11-20 09:53:38.365234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write 
completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 Write completed with error (sct=0, sc=8) 00:21:15.476 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 
00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 [2024-11-20 09:53:38.366242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, 
sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error 
(sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with 
error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 [2024-11-20 09:53:38.367881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:15.477 NVMe io qpair process completion error 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 Write completed with error (sct=0, sc=8) 00:21:15.477 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 
00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 [2024-11-20 09:53:38.368943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error 
(sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 
00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 [2024-11-20 09:53:38.369841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with 
error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 starting I/O failed: -6 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 Write completed with error (sct=0, sc=8) 00:21:15.478 
00:21:15.478 Write completed with error (sct=0, sc=8)
00:21:15.478 starting I/O failed: -6
[... the two lines above repeat for each outstanding I/O; verbatim duplicates omitted ...]
00:21:15.478 [2024-11-20 09:53:38.370844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines omitted ...]
00:21:15.479 [2024-11-20 09:53:38.372896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:15.479 NVMe io qpair process completion error
[... repeated write-error lines omitted ...]
00:21:15.479 [2024-11-20 09:53:38.373916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines omitted ...]
00:21:15.479 [2024-11-20 09:53:38.374850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error lines omitted ...]
00:21:15.480 [2024-11-20 09:53:38.375858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines omitted ...]
00:21:15.480 [2024-11-20 09:53:38.383730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:15.480 NVMe io qpair process completion error
[... repeated write-error lines omitted ...]
00:21:15.481 [2024-11-20 09:53:38.384737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines omitted ...]
00:21:15.481 [2024-11-20 09:53:38.385554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error lines omitted ...]
00:21:15.481 [2024-11-20 09:53:38.386585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines omitted ...]
00:21:15.482 [2024-11-20 09:53:38.388487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:15.482 NVMe io qpair process completion error
[... repeated write-error lines omitted ...]
00:21:15.482 [2024-11-20 09:53:38.389535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines continue ...]
00:21:15.482 starting I/O failed: -6 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 starting I/O failed: -6 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 starting I/O failed: -6 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 starting I/O failed: -6 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 starting I/O failed: -6 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 starting I/O failed: -6 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 starting I/O failed: -6 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 starting I/O failed: -6 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 starting I/O failed: -6 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.482 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write 
completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 [2024-11-20 09:53:38.390456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write 
completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 
00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 [2024-11-20 09:53:38.391484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 
Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6 
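Every failed write in this log carries the same NVMe completion status, (sct=0, sc=8), and the same errno, -6 (ENXIO, "No such device or address", as the ERROR lines themselves state). As a minimal illustration (not part of the SPDK test output), the status pair can be decoded against the NVMe base specification's status tables; the tables below are partial sketches and the helper name `decode_status` is our own:

```python
# Illustrative decoder for the (sct, sc) pair printed in the log above.
# The dict names are our own; the values are a partial transcription of
# the NVMe base spec's Status Code Type and Generic Command Status tables.

STATUS_CODE_TYPES = {
    0x0: "Generic Command Status",
    0x1: "Command Specific Status",
    0x2: "Media and Data Integrity Errors",
    0x7: "Vendor Specific",
}

GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x04: "Data Transfer Error",
    0x06: "Internal Error",
    0x07: "Command Abort Requested",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a human-readable description of an NVMe completion status."""
    sct_name = STATUS_CODE_TYPES.get(sct, f"Unknown SCT {sct:#x}")
    if sct == 0x0:
        sc_name = GENERIC_STATUS.get(sc, f"Unknown SC {sc:#x}")
    else:
        sc_name = f"SC {sc:#x}"  # command-specific / media tables not transcribed here
    return f"{sct_name}: {sc_name}"

print(decode_status(0, 8))
```

Under that reading, sc=0x08 ("Command Aborted due to SQ Deletion") is consistent with the test scenario: writes are aborted because their queue pairs are torn down while I/O is still in flight, after which further submissions fail with ENXIO.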
00:21:15.483 Write completed with error (sct=0, sc=8) 00:21:15.483 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log entries omitted ...]
00:21:15.484 [2024-11-20 09:53:38.393301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:15.484 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log entries omitted ...]
00:21:15.484 [2024-11-20 09:53:38.394322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log entries omitted ...]
00:21:15.484 [2024-11-20 09:53:38.395253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log entries omitted ...]
00:21:15.485 [2024-11-20 09:53:38.396295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log entries omitted ...]
00:21:15.485 [2024-11-20 09:53:38.404916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:15.485 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log entries omitted ...]
00:21:15.485 Write completed
with error (sct=0, sc=8) 00:21:15.485 starting I/O failed: -6 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.485 starting I/O failed: -6 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.485 starting I/O failed: -6 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.485 starting I/O failed: -6 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.485 starting I/O failed: -6 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.485 starting I/O failed: -6 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.485 starting I/O failed: -6 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.485 starting I/O failed: -6 00:21:15.485 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed 
with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write 
completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 
Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 
00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: 
-6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 starting I/O failed: -6 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.486 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 
00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 [2024-11-20 09:53:38.409644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error 
(sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 
00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 [2024-11-20 09:53:38.410584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 
00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 
00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 Write completed with error (sct=0, sc=8) 00:21:15.487 starting I/O failed: -6 00:21:15.488 [2024-11-20 09:53:38.411581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 
Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 
00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: -6 00:21:15.488 Write completed with error (sct=0, sc=8) 00:21:15.488 starting I/O failed: 
-6
00:21:15.488 [2024-11-20 09:53:38.414126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:15.488 NVMe io qpair process completion error
00:21:15.488 Initializing NVMe Controllers
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:15.488 Controller IO queue size 128, less than required.
00:21:15.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:15.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:15.488 Initialization complete. Launching workers.
00:21:15.488 ======================================================== 00:21:15.488 Latency(us) 00:21:15.488 Device Information : IOPS MiB/s Average min max 00:21:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2113.67 90.82 60553.51 1199.95 109113.70 00:21:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2135.48 91.76 59217.75 654.67 107464.57 00:21:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2162.65 92.93 59146.08 961.16 123468.97 00:21:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2154.31 92.57 58695.07 956.94 103231.57 00:21:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2144.47 92.15 58974.54 865.56 102564.38 00:21:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2145.32 92.18 58962.39 735.93 101429.03 00:21:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2174.20 93.42 58195.30 681.32 102196.56 00:21:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2164.79 93.02 58535.15 860.25 98369.46 00:21:15.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2152.60 92.49 58881.20 766.64 113169.81 00:21:15.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2119.23 91.06 59824.10 722.45 115537.72 00:21:15.489 ======================================================== 00:21:15.489 Total : 21466.72 922.40 59093.50 654.67 123468.97 00:21:15.489 00:21:15.489 [2024-11-20 09:53:38.420691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dfae0 is same with the state(6) to be set 00:21:15.489 [2024-11-20 09:53:38.420738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16df720 is same with the state(6) to be set 00:21:15.489 [2024-11-20 09:53:38.420769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x16de740 is same with the state(6) to be set 00:21:15.489 [2024-11-20 09:53:38.420797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dd560 is same with the state(6) to be set 00:21:15.489 [2024-11-20 09:53:38.420825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dd890 is same with the state(6) to be set 00:21:15.489 [2024-11-20 09:53:38.420853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16de410 is same with the state(6) to be set 00:21:15.489 [2024-11-20 09:53:38.420881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16df900 is same with the state(6) to be set 00:21:15.489 [2024-11-20 09:53:38.420909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddef0 is same with the state(6) to be set 00:21:15.489 [2024-11-20 09:53:38.420937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ddbc0 is same with the state(6) to be set 00:21:15.489 [2024-11-20 09:53:38.420974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dea70 is same with the state(6) to be set 00:21:15.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:15.489 09:53:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:16.421 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2973416 00:21:16.421 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:16.421 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2973416 00:21:16.421 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:21:16.421 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.421 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:16.421 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2973416 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.422 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.680 rmmod nvme_tcp 00:21:16.680 rmmod nvme_fabrics 00:21:16.680 rmmod nvme_keyring 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2973133 ']' 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2973133 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2973133 ']' 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2973133 00:21:16.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2973133) - No such process 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2973133 is not found' 00:21:16.680 Process with pid 2973133 is not found 
00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.680 09:53:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.587 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:18.587 00:21:18.587 real 0m10.427s 00:21:18.587 user 0m27.608s 00:21:18.587 sys 0m5.195s 00:21:18.587 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.587 09:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:18.587 ************************************ 00:21:18.587 END TEST nvmf_shutdown_tc4 00:21:18.587 ************************************ 00:21:18.845 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:18.845 00:21:18.845 real 0m42.594s 00:21:18.845 user 1m48.094s 00:21:18.845 sys 0m14.127s 00:21:18.845 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.845 09:53:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:18.845 ************************************ 00:21:18.845 END TEST nvmf_shutdown 00:21:18.845 ************************************ 00:21:18.845 09:53:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:18.845 09:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:18.845 09:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.845 09:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:18.845 ************************************ 00:21:18.845 START TEST nvmf_nsid 00:21:18.845 ************************************ 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:18.845 * Looking for test storage... 
00:21:18.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1703 -- # lcov --version 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.845 
09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:18.845 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:21:19.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.102 --rc genhtml_branch_coverage=1 00:21:19.102 --rc genhtml_function_coverage=1 00:21:19.102 --rc genhtml_legend=1 00:21:19.102 --rc geninfo_all_blocks=1 00:21:19.102 --rc 
geninfo_unexecuted_blocks=1 00:21:19.102 00:21:19.102 ' 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:21:19.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.102 --rc genhtml_branch_coverage=1 00:21:19.102 --rc genhtml_function_coverage=1 00:21:19.102 --rc genhtml_legend=1 00:21:19.102 --rc geninfo_all_blocks=1 00:21:19.102 --rc geninfo_unexecuted_blocks=1 00:21:19.102 00:21:19.102 ' 00:21:19.102 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:21:19.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.102 --rc genhtml_branch_coverage=1 00:21:19.102 --rc genhtml_function_coverage=1 00:21:19.102 --rc genhtml_legend=1 00:21:19.102 --rc geninfo_all_blocks=1 00:21:19.103 --rc geninfo_unexecuted_blocks=1 00:21:19.103 00:21:19.103 ' 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:21:19.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.103 --rc genhtml_branch_coverage=1 00:21:19.103 --rc genhtml_function_coverage=1 00:21:19.103 --rc genhtml_legend=1 00:21:19.103 --rc geninfo_all_blocks=1 00:21:19.103 --rc geninfo_unexecuted_blocks=1 00:21:19.103 00:21:19.103 ' 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.103 09:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.103 09:53:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:25.680 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:25.681 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:25.681 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:25.681 Found net devices under 0000:86:00.0: cvl_0_0 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:25.681 Found net devices under 0000:86:00.1: cvl_0_1 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.681 09:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:25.681 09:53:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.681 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:25.682 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:25.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:21:25.682 00:21:25.682 --- 10.0.0.2 ping statistics --- 00:21:25.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.682 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:21:25.682 00:21:25.682 --- 10.0.0.1 ping statistics --- 00:21:25.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.682 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.682 09:53:48 
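The trace above shows `nvmf_tcp_init` (nvmf/common.sh) moving the target NIC into a private network namespace, addressing both ends, opening port 4420, and ping-verifying connectivity in both directions. A dry-run sketch of that sequence follows; the interface names, namespace name, and 10.0.0.x addresses are taken from this log, while the `run` wrapper is a hypothetical helper that only prints each command (the real ones need root):

```shell
# Dry-run sketch of the netns setup performed by nvmf_tcp_init in this log.
# TGT_IF/INI_IF and the addresses match the values traced above; swap in
# your own interfaces and drop the dry-run wrapper to execute for real.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # replace body with: "$@"  to actually execute

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"          # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"      # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # root ns -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> root ns
```

The namespace is why the target is later launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`: target and initiator share one host but see separate network stacks.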
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2977877 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2977877 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2977877 ']' 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.682 [2024-11-20 09:53:48.162328] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:21:25.682 [2024-11-20 09:53:48.162371] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.682 [2024-11-20 09:53:48.240327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.682 [2024-11-20 09:53:48.282376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.682 [2024-11-20 09:53:48.282412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.682 [2024-11-20 09:53:48.282420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.682 [2024-11-20 09:53:48.282427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.682 [2024-11-20 09:53:48.282433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:25.682 [2024-11-20 09:53:48.282997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2977934 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.682 
09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:25.682 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=2b134345-0c73-4fc7-af92-1471306f6937 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=c021c391-15d6-4ccf-b5b2-324a4f543cb9 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=80a1cf42-dc11-4e8d-9d13-e9ef5670c78c 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.683 null0 00:21:25.683 null1 00:21:25.683 [2024-11-20 09:53:48.478578] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:21:25.683 [2024-11-20 09:53:48.478620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977934 ] 00:21:25.683 null2 00:21:25.683 [2024-11-20 09:53:48.483793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.683 [2024-11-20 09:53:48.508041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.683 [2024-11-20 09:53:48.538497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2977934 /var/tmp/tgt2.sock 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2977934 ']' 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:25.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:25.683 [2024-11-20 09:53:48.586855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:25.683 09:53:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:25.942 [2024-11-20 09:53:49.120825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.942 [2024-11-20 09:53:49.136928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:25.942 nvme0n1 nvme0n2 00:21:25.942 nvme1n1 00:21:25.942 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:25.942 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:25.942 09:53:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 
00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:27.320 09:53:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 2b134345-0c73-4fc7-af92-1471306f6937 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:28.256 
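The `waitforblk` helper traced above polls `lsblk` once per second until the freshly connected namespace (`nvme0n1`) shows up as a block device, giving up after 15 tries. A generic sketch of the same retry pattern — the function name `wait_for_blockdev` and the optional retry-count parameter are mine; the real helper lives in autotest_common.sh:

```shell
# Poll until a block device name appears in `lsblk` output, as waitforblk
# does in the trace above: retry with a 1s sleep, up to $2 attempts
# (default 15), returning 0 on success and 1 on timeout.
wait_for_blockdev() {
    name=$1
    max=${2:-15}
    i=0
    while ! lsblk -l -o NAME 2>/dev/null | grep -q -w "$name"; do
        i=$((i + 1))
        [ "$i" -ge "$max" ] && return 1   # device never appeared
        sleep 1
    done
    return 0
}
```

In the log the first `lsblk | grep` misses (the device node is not yet there), the helper sleeps once, and the second pass at 09:53:51 finds `nvme0n1`.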
09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2b1343450c734fc7af921471306f6937 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2B1343450C734FC7AF921471306F6937 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 2B1343450C734FC7AF921471306F6937 == \2\B\1\3\4\3\4\5\0\C\7\3\4\F\C\7\A\F\9\2\1\4\7\1\3\0\6\F\6\9\3\7 ]] 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid c021c391-15d6-4ccf-b5b2-324a4f543cb9 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 
00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c021c39115d64ccfb5b2324a4f543cb9 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C021C39115D64CCFB5B2324A4F543CB9 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ C021C39115D64CCFB5B2324A4F543CB9 == \C\0\2\1\C\3\9\1\1\5\D\6\4\C\C\F\B\5\B\2\3\2\4\A\4\F\5\4\3\C\B\9 ]] 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 80a1cf42-dc11-4e8d-9d13-e9ef5670c78c 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 
00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=80a1cf42dc114e8d9d13e9ef5670c78c 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 80A1CF42DC114E8D9D13E9EF5670C78C 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 80A1CF42DC114E8D9D13E9EF5670C78C == \8\0\A\1\C\F\4\2\D\C\1\1\4\E\8\D\9\D\1\3\E\9\E\F\5\6\7\0\C\7\8\C ]] 00:21:28.256 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2977934 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2977934 ']' 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2977934 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977934 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977934' 00:21:28.516 killing process with pid 2977934 00:21:28.516 09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2977934 00:21:28.516 
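The NGUID checks traced above hinge on the `uuid2nguid` conversion (nvmf/common.sh@787): an RFC 4122 UUID becomes the 32-hex-digit NGUID form by stripping the dashes, and the comparison at target/nsid.sh@96–100 is done against the upper-cased value echoed at nsid.sh@43, matched to the `nguid` field from `nvme id-ns /dev/nvme0nX -o json | jq -r .nguid`. A standalone sketch combining the two steps into one helper (the combined function is my simplification of the two separate steps in the scripts):

```shell
# Convert a UUID to the upper-case NGUID form compared in the trace above:
# drop the dashes, then upper-case the remaining 32 hex digits.
uuid2nguid() {
    echo "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}
```

For example, `uuid2nguid 2b134345-0c73-4fc7-af92-1471306f6937` yields `2B1343450C734FC7AF921471306F6937`, matching the `[[ ... == ... ]]` comparison for nvme0n1 in the log.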
09:53:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2977934 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:28.775 rmmod nvme_tcp 00:21:28.775 rmmod nvme_fabrics 00:21:28.775 rmmod nvme_keyring 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2977877 ']' 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2977877 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2977877 ']' 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2977877 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.775 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977877 00:21:29.035 
09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977877' 00:21:29.035 killing process with pid 2977877 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2977877 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2977877 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.035 09:53:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.573 09:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:31.573 00:21:31.573 real 0m12.350s 00:21:31.573 user 0m9.642s 00:21:31.573 sys 0m5.508s 00:21:31.573 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.573 09:53:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:31.573 ************************************ 00:21:31.573 END TEST nvmf_nsid 00:21:31.573 ************************************ 00:21:31.573 09:53:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:31.573 00:21:31.573 real 12m6.992s 00:21:31.573 user 26m12.747s 00:21:31.573 sys 3m44.360s 00:21:31.573 09:53:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.573 09:53:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:31.573 ************************************ 00:21:31.573 END TEST nvmf_target_extra 00:21:31.573 ************************************ 00:21:31.573 09:53:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:31.573 09:53:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.573 09:53:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.573 09:53:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.573 ************************************ 00:21:31.573 START TEST nvmf_host 00:21:31.573 ************************************ 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:31.573 * Looking for test storage... 
00:21:31.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1703 -- # lcov --version 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:21:31.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.573 --rc genhtml_branch_coverage=1 00:21:31.573 --rc genhtml_function_coverage=1 00:21:31.573 --rc genhtml_legend=1 00:21:31.573 --rc geninfo_all_blocks=1 00:21:31.573 --rc geninfo_unexecuted_blocks=1 00:21:31.573 00:21:31.573 ' 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:21:31.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.573 --rc genhtml_branch_coverage=1 00:21:31.573 --rc genhtml_function_coverage=1 00:21:31.573 --rc genhtml_legend=1 00:21:31.573 --rc 
geninfo_all_blocks=1 00:21:31.573 --rc geninfo_unexecuted_blocks=1 00:21:31.573 00:21:31.573 ' 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:21:31.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.573 --rc genhtml_branch_coverage=1 00:21:31.573 --rc genhtml_function_coverage=1 00:21:31.573 --rc genhtml_legend=1 00:21:31.573 --rc geninfo_all_blocks=1 00:21:31.573 --rc geninfo_unexecuted_blocks=1 00:21:31.573 00:21:31.573 ' 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:21:31.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.573 --rc genhtml_branch_coverage=1 00:21:31.573 --rc genhtml_function_coverage=1 00:21:31.573 --rc genhtml_legend=1 00:21:31.573 --rc geninfo_all_blocks=1 00:21:31.573 --rc geninfo_unexecuted_blocks=1 00:21:31.573 00:21:31.573 ' 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.573 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.574 ************************************ 00:21:31.574 START TEST nvmf_multicontroller 00:21:31.574 ************************************ 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:31.574 * Looking for test storage... 
00:21:31.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # lcov --version 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:21:31.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.574 --rc genhtml_branch_coverage=1 00:21:31.574 --rc genhtml_function_coverage=1 
00:21:31.574 --rc genhtml_legend=1 00:21:31.574 --rc geninfo_all_blocks=1 00:21:31.574 --rc geninfo_unexecuted_blocks=1 00:21:31.574 00:21:31.574 ' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:21:31.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.574 --rc genhtml_branch_coverage=1 00:21:31.574 --rc genhtml_function_coverage=1 00:21:31.574 --rc genhtml_legend=1 00:21:31.574 --rc geninfo_all_blocks=1 00:21:31.574 --rc geninfo_unexecuted_blocks=1 00:21:31.574 00:21:31.574 ' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:21:31.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.574 --rc genhtml_branch_coverage=1 00:21:31.574 --rc genhtml_function_coverage=1 00:21:31.574 --rc genhtml_legend=1 00:21:31.574 --rc geninfo_all_blocks=1 00:21:31.574 --rc geninfo_unexecuted_blocks=1 00:21:31.574 00:21:31.574 ' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:21:31.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.574 --rc genhtml_branch_coverage=1 00:21:31.574 --rc genhtml_function_coverage=1 00:21:31.574 --rc genhtml_legend=1 00:21:31.574 --rc geninfo_all_blocks=1 00:21:31.574 --rc geninfo_unexecuted_blocks=1 00:21:31.574 00:21:31.574 ' 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.574 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.834 09:53:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.834 09:53:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:38.406 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:38.406 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.406 09:54:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:38.406 Found net devices under 0000:86:00.0: cvl_0_0 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:38.406 Found net devices under 0000:86:00.1: cvl_0_1 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.406 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:38.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:21:38.407 00:21:38.407 --- 10.0.0.2 ping statistics --- 00:21:38.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.407 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:38.407 00:21:38.407 --- 10.0.0.1 ping statistics --- 00:21:38.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.407 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2982256 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2982256 00:21:38.407 09:54:00 
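The nvmf_tcp_init sequence traced above (flush addresses, create the cvl_0_0_ns_spdk namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, then ping in both directions) can be sketched as a standalone script. Interface names, IPs, and the port are taken from the log; this is a dry-run sketch that echoes commands by default, since the real steps need root and the cvl_* NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf/common.sh nvmf_tcp_init steps in the log above.
# By default commands are only echoed; run with RUN=1 as root on a machine
# that actually has the cvl_* interfaces to execute them.
set -euo pipefail

nvmf_tcp_init_sketch() {
  local target_if=cvl_0_0 initiator_if=cvl_0_1
  local ns=cvl_0_0_ns_spdk
  local target_ip=10.0.0.2 initiator_ip=10.0.0.1

  run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

  run ip -4 addr flush "$target_if"
  run ip -4 addr flush "$initiator_if"
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"
  run ip addr add "$initiator_ip/24" dev "$initiator_if"
  run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  # Admit NVMe/TCP traffic on the default port before the connectivity check.
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 "$target_ip"
  run ip netns exec "$ns" ping -c 1 "$initiator_ip"
}

nvmf_tcp_init_sketch
```

Putting the target interface in its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, default namespace) over real NIC ports.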
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2982256 ']' 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.407 09:54:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.407 [2024-11-20 09:54:00.889331] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:21:38.407 [2024-11-20 09:54:00.889376] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.407 [2024-11-20 09:54:00.968621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:38.407 [2024-11-20 09:54:01.014110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.407 [2024-11-20 09:54:01.014146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:38.407 [2024-11-20 09:54:01.014153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.407 [2024-11-20 09:54:01.014160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.407 [2024-11-20 09:54:01.014167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.407 [2024-11-20 09:54:01.015481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.407 [2024-11-20 09:54:01.015587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.407 [2024-11-20 09:54:01.015588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.666 [2024-11-20 09:54:01.781900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.666 Malloc0 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.666 [2024-11-20 
09:54:01.843186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:38.666 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.667 [2024-11-20 09:54:01.851091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.667 Malloc1 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2982560 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2982560 /var/tmp/bdevperf.sock 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2982560 ']' 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.667 09:54:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.926 NVMe0n1 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
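The target-side setup that multicontroller.sh performs above reduces to a short JSON-RPC sequence: one TCP transport, then for each of cnode1/cnode2 a 64 MiB malloc bdev, a subsystem, a namespace, and listeners on ports 4420 and 4421. The RPC names and arguments are copied from the log; `rpc.py` stands in for SPDK's scripts/rpc.py and defaults to a dry-run echo here, so treat this as a sketch rather than a verbatim reproduction of the test script.

```shell
#!/usr/bin/env bash
# Sketch of the subsystem provisioning done by host/multicontroller.sh.
# RPC defaults to a dry-run echo; point it at SPDK's scripts/rpc.py (with
# nvmf_tgt already running) to apply the configuration for real.
set -euo pipefail

RPC=${RPC:-"echo rpc.py"}
TADDR=10.0.0.2

provision() {
  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2; do
    $RPC bdev_malloc_create 64 512 -b "Malloc$((i-1))"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i-1))"
    # Two listeners per subsystem so the host side can exercise multipath.
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a "$TADDR" -s 4420
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a "$TADDR" -s 4421
  done
}

provision
```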
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.926 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.185 1 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:39.185 09:54:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.185 request: 00:21:39.185 { 00:21:39.185 "name": "NVMe0", 00:21:39.185 "trtype": "tcp", 00:21:39.185 "traddr": "10.0.0.2", 00:21:39.185 "adrfam": "ipv4", 00:21:39.185 "trsvcid": "4420", 00:21:39.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.185 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:39.185 "hostaddr": "10.0.0.1", 00:21:39.185 "prchk_reftag": false, 00:21:39.185 "prchk_guard": false, 00:21:39.185 "hdgst": false, 00:21:39.185 "ddgst": false, 00:21:39.185 "allow_unrecognized_csi": false, 00:21:39.185 "method": "bdev_nvme_attach_controller", 00:21:39.185 "req_id": 1 00:21:39.185 } 00:21:39.185 Got JSON-RPC error response 00:21:39.185 response: 00:21:39.185 { 00:21:39.185 "code": -114, 00:21:39.185 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:39.185 } 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:39.185 09:54:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.185 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.185 request: 00:21:39.185 { 00:21:39.185 "name": "NVMe0", 00:21:39.185 "trtype": "tcp", 00:21:39.185 "traddr": "10.0.0.2", 00:21:39.185 "adrfam": "ipv4", 00:21:39.186 "trsvcid": "4420", 00:21:39.186 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:39.186 "hostaddr": "10.0.0.1", 00:21:39.186 "prchk_reftag": false, 00:21:39.186 "prchk_guard": false, 00:21:39.186 "hdgst": false, 00:21:39.186 "ddgst": false, 00:21:39.186 "allow_unrecognized_csi": false, 00:21:39.186 "method": "bdev_nvme_attach_controller", 00:21:39.186 "req_id": 1 00:21:39.186 } 00:21:39.186 Got JSON-RPC error response 00:21:39.186 response: 00:21:39.186 { 00:21:39.186 "code": -114, 00:21:39.186 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:39.186 } 00:21:39.186 09:54:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.186 request: 00:21:39.186 { 00:21:39.186 "name": "NVMe0", 00:21:39.186 "trtype": "tcp", 00:21:39.186 "traddr": "10.0.0.2", 00:21:39.186 "adrfam": "ipv4", 00:21:39.186 "trsvcid": "4420", 00:21:39.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.186 "hostaddr": "10.0.0.1", 00:21:39.186 "prchk_reftag": false, 00:21:39.186 "prchk_guard": false, 00:21:39.186 "hdgst": false, 00:21:39.186 "ddgst": false, 00:21:39.186 "multipath": "disable", 00:21:39.186 "allow_unrecognized_csi": false, 00:21:39.186 "method": "bdev_nvme_attach_controller", 00:21:39.186 "req_id": 1 00:21:39.186 } 00:21:39.186 Got JSON-RPC error response 00:21:39.186 response: 00:21:39.186 { 00:21:39.186 "code": -114, 00:21:39.186 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:39.186 } 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.186 request: 00:21:39.186 { 00:21:39.186 "name": "NVMe0", 00:21:39.186 "trtype": "tcp", 00:21:39.186 "traddr": "10.0.0.2", 00:21:39.186 "adrfam": "ipv4", 00:21:39.186 "trsvcid": "4420", 00:21:39.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.186 "hostaddr": "10.0.0.1", 00:21:39.186 "prchk_reftag": false, 00:21:39.186 "prchk_guard": false, 00:21:39.186 "hdgst": false, 00:21:39.186 "ddgst": false, 00:21:39.186 "multipath": "failover", 00:21:39.186 "allow_unrecognized_csi": false, 00:21:39.186 "method": "bdev_nvme_attach_controller", 00:21:39.186 "req_id": 1 00:21:39.186 } 00:21:39.186 Got JSON-RPC error response 00:21:39.186 response: 00:21:39.186 { 00:21:39.186 "code": -114, 00:21:39.186 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:39.186 } 00:21:39.186 09:54:02 
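Each failing bdev_nvme_attach_controller above runs through the autotest `NOT` wrapper, which passes only when the wrapped command fails (hence the `es=1` bookkeeping in the trace): re-attaching an existing controller name NVMe0 with a different host NQN, a different subsystem, or multipath disabled is expected to be rejected with JSON-RPC error -114. The real helper in autotest_common.sh does more (the valid_exec_arg and exit-code handling visible in the trace); a minimal stand-in for the core idea:

```shell
# Minimal stand-in for the autotest "NOT" helper seen in the trace: it
# inverts the wrapped command's exit status, so an attach that is expected
# to fail counts as a passing assertion.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  fi
  return 0     # command failed, as required
}
```

With this, `NOT rpc_cmd ... bdev_nvme_attach_controller -b NVMe0 ...` succeeds precisely because the controller name is already taken on that network path.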
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.186 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.445 NVMe0n1 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.445 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:39.445 09:54:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.819 { 00:21:40.819 "results": [ 00:21:40.819 { 00:21:40.819 "job": "NVMe0n1", 00:21:40.819 "core_mask": "0x1", 00:21:40.819 "workload": "write", 00:21:40.819 "status": "finished", 00:21:40.819 "queue_depth": 128, 00:21:40.819 "io_size": 4096, 00:21:40.819 "runtime": 1.005035, 00:21:40.819 "iops": 24525.51403682459, 00:21:40.819 "mibps": 95.80278920634605, 00:21:40.819 "io_failed": 0, 00:21:40.819 "io_timeout": 0, 00:21:40.819 "avg_latency_us": 5208.678981244499, 00:21:40.819 "min_latency_us": 3105.8365217391306, 00:21:40.819 "max_latency_us": 13677.078260869564 00:21:40.819 } 00:21:40.819 ], 00:21:40.819 "core_count": 1 00:21:40.819 } 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2982560 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2982560 ']' 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2982560 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2982560 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2982560' 00:21:40.819 killing process with pid 2982560 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2982560 00:21:40.819 09:54:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2982560 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:40.819 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:40.819 [2024-11-20 09:54:01.957211] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:21:40.819 [2024-11-20 09:54:01.957261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982560 ] 00:21:40.819 [2024-11-20 09:54:02.034309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.819 [2024-11-20 09:54:02.075733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.819 [2024-11-20 09:54:02.696590] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name b642ebca-6f1f-47d7-8bc9-58b40ae1ffcd already exists 00:21:40.819 [2024-11-20 09:54:02.696616] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:b642ebca-6f1f-47d7-8bc9-58b40ae1ffcd alias for bdev NVMe1n1 00:21:40.819 [2024-11-20 09:54:02.696623] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:40.819 Running I/O for 1 seconds... 00:21:40.819 24457.00 IOPS, 95.54 MiB/s 00:21:40.819 Latency(us) 00:21:40.819 [2024-11-20T08:54:04.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.819 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:40.819 NVMe0n1 : 1.01 24525.51 95.80 0.00 0.00 5208.68 3105.84 13677.08 00:21:40.819 [2024-11-20T08:54:04.151Z] =================================================================================================================== 00:21:40.819 [2024-11-20T08:54:04.151Z] Total : 24525.51 95.80 0.00 0.00 5208.68 3105.84 13677.08 00:21:40.819 Received shutdown signal, test time was about 1.000000 seconds 00:21:40.819 00:21:40.819 Latency(us) 00:21:40.819 [2024-11-20T08:54:04.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.819 [2024-11-20T08:54:04.151Z] =================================================================================================================== 00:21:40.819 [2024-11-20T08:54:04.151Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:40.819 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.819 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:40.820 rmmod nvme_tcp 00:21:40.820 rmmod nvme_fabrics 00:21:40.820 rmmod nvme_keyring 00:21:40.820 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.078 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2982256 ']' 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2982256 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2982256 ']' 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2982256 
00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2982256 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2982256' 00:21:41.079 killing process with pid 2982256 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2982256 00:21:41.079 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2982256 00:21:41.337 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.337 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.337 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.337 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:41.337 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:41.337 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.337 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.338 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.338 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:41.338 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.338 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.338 09:54:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.241 09:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.241 00:21:43.241 real 0m11.769s 00:21:43.241 user 0m14.393s 00:21:43.241 sys 0m5.142s 00:21:43.241 09:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.241 09:54:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 ************************************ 00:21:43.241 END TEST nvmf_multicontroller 00:21:43.241 ************************************ 00:21:43.241 09:54:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:43.241 09:54:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:43.241 09:54:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.241 09:54:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.241 ************************************ 00:21:43.241 START TEST nvmf_aer 00:21:43.241 ************************************ 00:21:43.241 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:43.501 * Looking for test storage... 
00:21:43.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # lcov --version 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:21:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.501 --rc genhtml_branch_coverage=1 00:21:43.501 --rc genhtml_function_coverage=1 00:21:43.501 --rc genhtml_legend=1 00:21:43.501 --rc geninfo_all_blocks=1 00:21:43.501 --rc geninfo_unexecuted_blocks=1 00:21:43.501 00:21:43.501 ' 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:21:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.501 --rc 
genhtml_branch_coverage=1 00:21:43.501 --rc genhtml_function_coverage=1 00:21:43.501 --rc genhtml_legend=1 00:21:43.501 --rc geninfo_all_blocks=1 00:21:43.501 --rc geninfo_unexecuted_blocks=1 00:21:43.501 00:21:43.501 ' 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:21:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.501 --rc genhtml_branch_coverage=1 00:21:43.501 --rc genhtml_function_coverage=1 00:21:43.501 --rc genhtml_legend=1 00:21:43.501 --rc geninfo_all_blocks=1 00:21:43.501 --rc geninfo_unexecuted_blocks=1 00:21:43.501 00:21:43.501 ' 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:21:43.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.501 --rc genhtml_branch_coverage=1 00:21:43.501 --rc genhtml_function_coverage=1 00:21:43.501 --rc genhtml_legend=1 00:21:43.501 --rc geninfo_all_blocks=1 00:21:43.501 --rc geninfo_unexecuted_blocks=1 00:21:43.501 00:21:43.501 ' 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.501 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.502 09:54:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.502 09:54:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.211 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:50.212 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:50.212 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.212 09:54:12 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:50.212 Found net devices under 0000:86:00.0: cvl_0_0 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:50.212 Found net devices under 0000:86:00.1: cvl_0_1 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:50.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:50.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:21:50.212 00:21:50.212 --- 10.0.0.2 ping statistics --- 00:21:50.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.212 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:50.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:21:50.212 00:21:50.212 --- 10.0.0.1 ping statistics --- 00:21:50.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.212 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2986754 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2986754 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2986754 ']' 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.212 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.212 [2024-11-20 09:54:12.737119] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:21:50.212 [2024-11-20 09:54:12.737170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.212 [2024-11-20 09:54:12.816625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.212 [2024-11-20 09:54:12.860845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
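The nvmfappstart call traced above launches the target inside the namespace by prefixing the app command with the `NVMF_TARGET_NS_CMD` array (common.sh@293/@508). A minimal sketch of that array-prefix pattern, with the real backgrounded launch left commented out since it needs the built binary and the namespace:

```shell
# Sketch of how the namespaced nvmf_tgt command line is assembled, as
# traced above. Paths and flags are taken from the log; the echo stands
# in for the real launch.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)

full_cmd=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 0xF)
# "${full_cmd[@]}" &          # real launch
# nvmfpid=$!                  # then waitforlisten $nvmfpid on /var/tmp/spdk.sock
echo "${full_cmd[@]}"
```

Keeping the namespace prefix as an array (rather than a string) preserves word boundaries if any element ever contains spaces, which is why common.sh composes `NVMF_APP` this way.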
00:21:50.212 [2024-11-20 09:54:12.860885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.212 [2024-11-20 09:54:12.860892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.213 [2024-11-20 09:54:12.860898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.213 [2024-11-20 09:54:12.860903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.213 [2024-11-20 09:54:12.862521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.213 [2024-11-20 09:54:12.862641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.213 [2024-11-20 09:54:12.862749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.213 [2024-11-20 09:54:12.862751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.213 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.213 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:50.213 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.213 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.213 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.213 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.213 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 [2024-11-20 09:54:13.004345] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 Malloc0 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 [2024-11-20 09:54:13.065575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
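The rpc_cmd calls traced above amount to the following RPC sequence. This is a dry-run sketch — `RPC="echo rpc.py"` just prints the invocations; pointing `RPC` at SPDK's scripts/rpc.py (path assumed) with a live target would execute them. The commands and arguments themselves are taken from the log: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem capped at 2 namespaces, the namespace attach, and a listener on 10.0.0.2:4420.

```shell
# The aer.sh setup RPCs as plain rpc.py invocations (dry run).
RPC="echo rpc.py"   # set to .../spdk/scripts/rpc.py to run for real

$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport init
$RPC bdev_malloc_create 64 512 --name Malloc0          # 64 MB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 2                      # allow any host, max 2 ns
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                               # dump state, as in the JSON below
```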
00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 [ 00:21:50.213 { 00:21:50.213 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:50.213 "subtype": "Discovery", 00:21:50.213 "listen_addresses": [], 00:21:50.213 "allow_any_host": true, 00:21:50.213 "hosts": [] 00:21:50.213 }, 00:21:50.213 { 00:21:50.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.213 "subtype": "NVMe", 00:21:50.213 "listen_addresses": [ 00:21:50.213 { 00:21:50.213 "trtype": "TCP", 00:21:50.213 "adrfam": "IPv4", 00:21:50.213 "traddr": "10.0.0.2", 00:21:50.213 "trsvcid": "4420" 00:21:50.213 } 00:21:50.213 ], 00:21:50.213 "allow_any_host": true, 00:21:50.213 "hosts": [], 00:21:50.213 "serial_number": "SPDK00000000000001", 00:21:50.213 "model_number": "SPDK bdev Controller", 00:21:50.213 "max_namespaces": 2, 00:21:50.213 "min_cntlid": 1, 00:21:50.213 "max_cntlid": 65519, 00:21:50.213 "namespaces": [ 00:21:50.213 { 00:21:50.213 "nsid": 1, 00:21:50.213 "bdev_name": "Malloc0", 00:21:50.213 "name": "Malloc0", 00:21:50.213 "nguid": "40AC061E39F94C8D85BB5DF3CA4F334C", 00:21:50.213 "uuid": "40ac061e-39f9-4c8d-85bb-5df3ca4f334c" 00:21:50.213 } 00:21:50.213 ] 00:21:50.213 } 00:21:50.213 ] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2986910 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 Malloc1 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 Asynchronous Event Request test 00:21:50.213 Attaching to 10.0.0.2 00:21:50.213 Attached to 10.0.0.2 00:21:50.213 Registering asynchronous event callbacks... 00:21:50.213 Starting namespace attribute notice tests for all controllers... 00:21:50.213 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:50.213 aer_cb - Changed Namespace 00:21:50.213 Cleaning up... 
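The waitforfile polling traced above (autotest_common.sh@1269-1280) checks for the touch file the aer binary creates once its AER callbacks are registered, sleeping 0.1 s between checks up to 200 iterations. A self-contained sketch with the same semantics (the function name matches the helper in the trace; the implementation is a reconstruction):

```shell
# Poll for a file every 0.1 s, giving up after ~20 s.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        if [ "$i" -ge 200 ]; then
            return 1        # timed out
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0                # file appeared
}
```

This is the synchronization point between the test script and the aer process: only after the touch file exists does the script issue the Malloc1 RPCs that trigger the namespace-change AEN being tested.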
00:21:50.213 [ 00:21:50.213 { 00:21:50.213 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:50.213 "subtype": "Discovery", 00:21:50.213 "listen_addresses": [], 00:21:50.213 "allow_any_host": true, 00:21:50.213 "hosts": [] 00:21:50.213 }, 00:21:50.213 { 00:21:50.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.213 "subtype": "NVMe", 00:21:50.213 "listen_addresses": [ 00:21:50.213 { 00:21:50.213 "trtype": "TCP", 00:21:50.213 "adrfam": "IPv4", 00:21:50.213 "traddr": "10.0.0.2", 00:21:50.213 "trsvcid": "4420" 00:21:50.213 } 00:21:50.213 ], 00:21:50.213 "allow_any_host": true, 00:21:50.213 "hosts": [], 00:21:50.213 "serial_number": "SPDK00000000000001", 00:21:50.213 "model_number": "SPDK bdev Controller", 00:21:50.213 "max_namespaces": 2, 00:21:50.213 "min_cntlid": 1, 00:21:50.213 "max_cntlid": 65519, 00:21:50.213 "namespaces": [ 00:21:50.213 { 00:21:50.213 "nsid": 1, 00:21:50.213 "bdev_name": "Malloc0", 00:21:50.213 "name": "Malloc0", 00:21:50.213 "nguid": "40AC061E39F94C8D85BB5DF3CA4F334C", 00:21:50.213 "uuid": "40ac061e-39f9-4c8d-85bb-5df3ca4f334c" 00:21:50.213 }, 00:21:50.213 { 00:21:50.213 "nsid": 2, 00:21:50.213 "bdev_name": "Malloc1", 00:21:50.213 "name": "Malloc1", 00:21:50.213 "nguid": "58CA9461294D498C9871AE05CEA73C98", 00:21:50.213 "uuid": "58ca9461-294d-498c-9871-ae05cea73c98" 00:21:50.213 } 00:21:50.213 ] 00:21:50.213 } 00:21:50.213 ] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2986910 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.213 09:54:13 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.213 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.214 rmmod nvme_tcp 00:21:50.214 rmmod nvme_fabrics 00:21:50.214 rmmod nvme_keyring 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2986754 ']' 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2986754 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2986754 ']' 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2986754 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2986754 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2986754' 00:21:50.214 killing process with pid 2986754 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2986754 00:21:50.214 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2986754 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.473 09:54:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:53.011 00:21:53.011 real 0m9.215s 00:21:53.011 user 0m5.081s 00:21:53.011 sys 0m4.846s 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:53.011 ************************************ 00:21:53.011 END TEST nvmf_aer 00:21:53.011 ************************************ 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.011 ************************************ 00:21:53.011 START TEST nvmf_async_init 00:21:53.011 ************************************ 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:53.011 * Looking for test storage... 
00:21:53.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # lcov --version 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.011 09:54:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.011 09:54:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:21:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.011 --rc genhtml_branch_coverage=1 00:21:53.011 --rc genhtml_function_coverage=1 00:21:53.011 --rc genhtml_legend=1 00:21:53.011 --rc geninfo_all_blocks=1 00:21:53.011 --rc geninfo_unexecuted_blocks=1 00:21:53.011 
00:21:53.011 ' 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:21:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.011 --rc genhtml_branch_coverage=1 00:21:53.011 --rc genhtml_function_coverage=1 00:21:53.011 --rc genhtml_legend=1 00:21:53.011 --rc geninfo_all_blocks=1 00:21:53.011 --rc geninfo_unexecuted_blocks=1 00:21:53.011 00:21:53.011 ' 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:21:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.011 --rc genhtml_branch_coverage=1 00:21:53.011 --rc genhtml_function_coverage=1 00:21:53.011 --rc genhtml_legend=1 00:21:53.011 --rc geninfo_all_blocks=1 00:21:53.011 --rc geninfo_unexecuted_blocks=1 00:21:53.011 00:21:53.011 ' 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:21:53.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.011 --rc genhtml_branch_coverage=1 00:21:53.011 --rc genhtml_function_coverage=1 00:21:53.011 --rc genhtml_legend=1 00:21:53.011 --rc geninfo_all_blocks=1 00:21:53.011 --rc geninfo_unexecuted_blocks=1 00:21:53.011 00:21:53.011 ' 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.011 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
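The `lt 1.15 2` check traced a little earlier (scripts/common.sh cmp_versions: split each version on `.`/`-` into an array, then compare component by component) can be sketched as a standalone function. `version_lt` is an illustrative name, not the SPDK helper itself:

```shell
# Return 0 if $1 is a strictly lower version than $2, comparing
# dot/dash-separated numeric components; missing components count as 0.
version_lt() {
    local IFS=.-
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1    # equal is not "less than"
}
```

In the run above this is how the harness decides the installed lcov (1.15) predates 2.x and therefore needs the `--rc lcov_branch_coverage=1 ...` option spelling rather than the newer flags.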
00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=140be88a9d2b4a28914bb3700625fbfb 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:53.012 09:54:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.584 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.584 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:59.584 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:59.584 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:59.584 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:59.584 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:59.584 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:59.585 09:54:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:59.585 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:59.585 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:59.585 Found net devices under 0000:86:00.0: cvl_0_0 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:59.585 Found net devices under 0000:86:00.1: cvl_0_1 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:59.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:59.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:21:59.585 00:21:59.585 --- 10.0.0.2 ping statistics --- 00:21:59.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.585 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:21:59.585 00:21:59.585 --- 10.0.0.1 ping statistics --- 00:21:59.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.585 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.585 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2990520 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2990520 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2990520 ']' 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.586 09:54:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 [2024-11-20 09:54:22.017630] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:21:59.586 [2024-11-20 09:54:22.017674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.586 [2024-11-20 09:54:22.097187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.586 [2024-11-20 09:54:22.138853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.586 [2024-11-20 09:54:22.138891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.586 [2024-11-20 09:54:22.138899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.586 [2024-11-20 09:54:22.138906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.586 [2024-11-20 09:54:22.138911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:59.586 [2024-11-20 09:54:22.139484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 [2024-11-20 09:54:22.274885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 null0 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 140be88a9d2b4a28914bb3700625fbfb 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 [2024-11-20 09:54:22.331176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 nvme0n1 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 [ 00:21:59.586 { 00:21:59.586 "name": "nvme0n1", 00:21:59.586 "aliases": [ 00:21:59.586 "140be88a-9d2b-4a28-914b-b3700625fbfb" 00:21:59.586 ], 00:21:59.586 "product_name": "NVMe disk", 00:21:59.586 "block_size": 512, 00:21:59.586 "num_blocks": 2097152, 00:21:59.586 "uuid": "140be88a-9d2b-4a28-914b-b3700625fbfb", 00:21:59.586 "numa_id": 1, 00:21:59.586 "assigned_rate_limits": { 00:21:59.586 "rw_ios_per_sec": 0, 00:21:59.586 "rw_mbytes_per_sec": 0, 00:21:59.586 "r_mbytes_per_sec": 0, 00:21:59.586 "w_mbytes_per_sec": 0 00:21:59.586 }, 00:21:59.586 "claimed": false, 00:21:59.586 "zoned": false, 00:21:59.586 "supported_io_types": { 00:21:59.586 "read": true, 00:21:59.586 "write": true, 00:21:59.586 "unmap": false, 00:21:59.586 "flush": true, 00:21:59.586 "reset": true, 00:21:59.586 "nvme_admin": true, 00:21:59.586 "nvme_io": true, 00:21:59.586 "nvme_io_md": false, 00:21:59.586 "write_zeroes": true, 00:21:59.586 "zcopy": false, 00:21:59.586 "get_zone_info": false, 00:21:59.586 "zone_management": false, 00:21:59.586 "zone_append": false, 00:21:59.586 "compare": true, 00:21:59.586 "compare_and_write": true, 00:21:59.586 "abort": true, 00:21:59.586 "seek_hole": false, 00:21:59.586 "seek_data": false, 00:21:59.586 "copy": true, 00:21:59.586 
"nvme_iov_md": false 00:21:59.586 }, 00:21:59.586 "memory_domains": [ 00:21:59.586 { 00:21:59.586 "dma_device_id": "system", 00:21:59.586 "dma_device_type": 1 00:21:59.586 } 00:21:59.586 ], 00:21:59.586 "driver_specific": { 00:21:59.586 "nvme": [ 00:21:59.586 { 00:21:59.586 "trid": { 00:21:59.586 "trtype": "TCP", 00:21:59.586 "adrfam": "IPv4", 00:21:59.586 "traddr": "10.0.0.2", 00:21:59.586 "trsvcid": "4420", 00:21:59.586 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:59.586 }, 00:21:59.586 "ctrlr_data": { 00:21:59.586 "cntlid": 1, 00:21:59.586 "vendor_id": "0x8086", 00:21:59.586 "model_number": "SPDK bdev Controller", 00:21:59.586 "serial_number": "00000000000000000000", 00:21:59.586 "firmware_revision": "25.01", 00:21:59.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.586 "oacs": { 00:21:59.586 "security": 0, 00:21:59.586 "format": 0, 00:21:59.586 "firmware": 0, 00:21:59.586 "ns_manage": 0 00:21:59.586 }, 00:21:59.586 "multi_ctrlr": true, 00:21:59.586 "ana_reporting": false 00:21:59.586 }, 00:21:59.586 "vs": { 00:21:59.586 "nvme_version": "1.3" 00:21:59.586 }, 00:21:59.586 "ns_data": { 00:21:59.586 "id": 1, 00:21:59.586 "can_share": true 00:21:59.586 } 00:21:59.586 } 00:21:59.586 ], 00:21:59.586 "mp_policy": "active_passive" 00:21:59.586 } 00:21:59.586 } 00:21:59.586 ] 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.586 [2024-11-20 09:54:22.591742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:59.586 [2024-11-20 09:54:22.591796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1c29220 (9): Bad file descriptor 00:21:59.586 [2024-11-20 09:54:22.724025] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:59.586 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 [ 00:21:59.587 { 00:21:59.587 "name": "nvme0n1", 00:21:59.587 "aliases": [ 00:21:59.587 "140be88a-9d2b-4a28-914b-b3700625fbfb" 00:21:59.587 ], 00:21:59.587 "product_name": "NVMe disk", 00:21:59.587 "block_size": 512, 00:21:59.587 "num_blocks": 2097152, 00:21:59.587 "uuid": "140be88a-9d2b-4a28-914b-b3700625fbfb", 00:21:59.587 "numa_id": 1, 00:21:59.587 "assigned_rate_limits": { 00:21:59.587 "rw_ios_per_sec": 0, 00:21:59.587 "rw_mbytes_per_sec": 0, 00:21:59.587 "r_mbytes_per_sec": 0, 00:21:59.587 "w_mbytes_per_sec": 0 00:21:59.587 }, 00:21:59.587 "claimed": false, 00:21:59.587 "zoned": false, 00:21:59.587 "supported_io_types": { 00:21:59.587 "read": true, 00:21:59.587 "write": true, 00:21:59.587 "unmap": false, 00:21:59.587 "flush": true, 00:21:59.587 "reset": true, 00:21:59.587 "nvme_admin": true, 00:21:59.587 "nvme_io": true, 00:21:59.587 "nvme_io_md": false, 00:21:59.587 "write_zeroes": true, 00:21:59.587 "zcopy": false, 00:21:59.587 "get_zone_info": false, 00:21:59.587 "zone_management": false, 00:21:59.587 "zone_append": false, 00:21:59.587 "compare": true, 00:21:59.587 "compare_and_write": true, 00:21:59.587 "abort": true, 00:21:59.587 "seek_hole": false, 00:21:59.587 "seek_data": false, 00:21:59.587 "copy": true, 00:21:59.587 "nvme_iov_md": false 00:21:59.587 }, 00:21:59.587 "memory_domains": [ 
00:21:59.587 { 00:21:59.587 "dma_device_id": "system", 00:21:59.587 "dma_device_type": 1 00:21:59.587 } 00:21:59.587 ], 00:21:59.587 "driver_specific": { 00:21:59.587 "nvme": [ 00:21:59.587 { 00:21:59.587 "trid": { 00:21:59.587 "trtype": "TCP", 00:21:59.587 "adrfam": "IPv4", 00:21:59.587 "traddr": "10.0.0.2", 00:21:59.587 "trsvcid": "4420", 00:21:59.587 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:59.587 }, 00:21:59.587 "ctrlr_data": { 00:21:59.587 "cntlid": 2, 00:21:59.587 "vendor_id": "0x8086", 00:21:59.587 "model_number": "SPDK bdev Controller", 00:21:59.587 "serial_number": "00000000000000000000", 00:21:59.587 "firmware_revision": "25.01", 00:21:59.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.587 "oacs": { 00:21:59.587 "security": 0, 00:21:59.587 "format": 0, 00:21:59.587 "firmware": 0, 00:21:59.587 "ns_manage": 0 00:21:59.587 }, 00:21:59.587 "multi_ctrlr": true, 00:21:59.587 "ana_reporting": false 00:21:59.587 }, 00:21:59.587 "vs": { 00:21:59.587 "nvme_version": "1.3" 00:21:59.587 }, 00:21:59.587 "ns_data": { 00:21:59.587 "id": 1, 00:21:59.587 "can_share": true 00:21:59.587 } 00:21:59.587 } 00:21:59.587 ], 00:21:59.587 "mp_policy": "active_passive" 00:21:59.587 } 00:21:59.587 } 00:21:59.587 ] 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.DPAeW7NqZB 
00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.DPAeW7NqZB 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.DPAeW7NqZB 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 [2024-11-20 09:54:22.796364] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:59.587 [2024-11-20 09:54:22.796461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 [2024-11-20 09:54:22.816428] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.587 nvme0n1 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.587 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.587 [ 00:21:59.587 { 00:21:59.587 "name": "nvme0n1", 00:21:59.587 "aliases": [ 00:21:59.587 "140be88a-9d2b-4a28-914b-b3700625fbfb" 00:21:59.587 ], 00:21:59.587 "product_name": "NVMe disk", 00:21:59.587 "block_size": 512, 00:21:59.587 "num_blocks": 2097152, 00:21:59.587 "uuid": "140be88a-9d2b-4a28-914b-b3700625fbfb", 00:21:59.587 "numa_id": 1, 00:21:59.587 "assigned_rate_limits": { 00:21:59.587 "rw_ios_per_sec": 0, 00:21:59.587 
"rw_mbytes_per_sec": 0, 00:21:59.587 "r_mbytes_per_sec": 0, 00:21:59.587 "w_mbytes_per_sec": 0 00:21:59.587 }, 00:21:59.587 "claimed": false, 00:21:59.587 "zoned": false, 00:21:59.587 "supported_io_types": { 00:21:59.587 "read": true, 00:21:59.587 "write": true, 00:21:59.587 "unmap": false, 00:21:59.587 "flush": true, 00:21:59.587 "reset": true, 00:21:59.587 "nvme_admin": true, 00:21:59.587 "nvme_io": true, 00:21:59.587 "nvme_io_md": false, 00:21:59.587 "write_zeroes": true, 00:21:59.587 "zcopy": false, 00:21:59.587 "get_zone_info": false, 00:21:59.587 "zone_management": false, 00:21:59.587 "zone_append": false, 00:21:59.587 "compare": true, 00:21:59.587 "compare_and_write": true, 00:21:59.587 "abort": true, 00:21:59.587 "seek_hole": false, 00:21:59.587 "seek_data": false, 00:21:59.587 "copy": true, 00:21:59.587 "nvme_iov_md": false 00:21:59.587 }, 00:21:59.587 "memory_domains": [ 00:21:59.587 { 00:21:59.587 "dma_device_id": "system", 00:21:59.587 "dma_device_type": 1 00:21:59.587 } 00:21:59.587 ], 00:21:59.587 "driver_specific": { 00:21:59.587 "nvme": [ 00:21:59.587 { 00:21:59.587 "trid": { 00:21:59.587 "trtype": "TCP", 00:21:59.587 "adrfam": "IPv4", 00:21:59.587 "traddr": "10.0.0.2", 00:21:59.587 "trsvcid": "4421", 00:21:59.587 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:59.588 }, 00:21:59.588 "ctrlr_data": { 00:21:59.588 "cntlid": 3, 00:21:59.588 "vendor_id": "0x8086", 00:21:59.588 "model_number": "SPDK bdev Controller", 00:21:59.588 "serial_number": "00000000000000000000", 00:21:59.588 "firmware_revision": "25.01", 00:21:59.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:59.588 "oacs": { 00:21:59.588 "security": 0, 00:21:59.588 "format": 0, 00:21:59.588 "firmware": 0, 00:21:59.588 "ns_manage": 0 00:21:59.588 }, 00:21:59.588 "multi_ctrlr": true, 00:21:59.588 "ana_reporting": false 00:21:59.588 }, 00:21:59.588 "vs": { 00:21:59.588 "nvme_version": "1.3" 00:21:59.588 }, 00:21:59.588 "ns_data": { 00:21:59.588 "id": 1, 00:21:59.588 "can_share": true 00:21:59.588 } 
00:21:59.588 } 00:21:59.588 ], 00:21:59.588 "mp_policy": "active_passive" 00:21:59.588 } 00:21:59.588 } 00:21:59.588 ] 00:21:59.588 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.588 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:59.588 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.588 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.DPAeW7NqZB 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.847 rmmod nvme_tcp 00:21:59.847 rmmod nvme_fabrics 00:21:59.847 rmmod nvme_keyring 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:59.847 09:54:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2990520 ']' 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2990520 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2990520 ']' 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2990520 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.847 09:54:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2990520 00:21:59.847 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.847 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.847 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2990520' 00:21:59.847 killing process with pid 2990520 00:21:59.847 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2990520 00:21:59.847 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2990520 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:00.106 
09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.106 09:54:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.011 09:54:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.011 00:22:02.011 real 0m9.432s 00:22:02.011 user 0m3.081s 00:22:02.011 sys 0m4.776s 00:22:02.011 09:54:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.011 09:54:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.011 ************************************ 00:22:02.011 END TEST nvmf_async_init 00:22:02.011 ************************************ 00:22:02.011 09:54:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:02.011 09:54:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:02.011 09:54:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.011 09:54:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.011 ************************************ 00:22:02.011 START TEST dma 00:22:02.011 ************************************ 00:22:02.011 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
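The dma test launched here finishes almost immediately on this run: as its trace shows further down (`host/dma.sh@12 -- # '[' tcp '!=' rdma ']'` followed by `exit 0`), the test body only applies to the RDMA transport, so a `--transport=tcp` run takes the guard branch and exits successfully. A simplified sketch of that guard (the real script exits 0 where this sketch just echoes, so the snippet stays composable):

```shell
# Simplified transport guard, modeled on host/dma.sh lines 12-13.
TEST_TRANSPORT=tcp
if [ "$TEST_TRANSPORT" != "rdma" ]; then
    # dma.sh does `exit 0` here, which is why the TCP run reports
    # "END TEST dma" after only ~0.2s of wall time.
    echo "dma test skipped on $TEST_TRANSPORT"
else
    echo "would run dma test body"
fi
```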
00:22:02.270 * Looking for test storage... 00:22:02.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1703 -- # lcov --version 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:22:02.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.270 --rc genhtml_branch_coverage=1 00:22:02.270 --rc genhtml_function_coverage=1 00:22:02.270 --rc genhtml_legend=1 00:22:02.270 --rc geninfo_all_blocks=1 00:22:02.270 --rc geninfo_unexecuted_blocks=1 00:22:02.270 00:22:02.270 ' 00:22:02.270 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:22:02.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.270 --rc genhtml_branch_coverage=1 00:22:02.270 --rc genhtml_function_coverage=1 
00:22:02.270 --rc genhtml_legend=1 00:22:02.271 --rc geninfo_all_blocks=1 00:22:02.271 --rc geninfo_unexecuted_blocks=1 00:22:02.271 00:22:02.271 ' 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:22:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.271 --rc genhtml_branch_coverage=1 00:22:02.271 --rc genhtml_function_coverage=1 00:22:02.271 --rc genhtml_legend=1 00:22:02.271 --rc geninfo_all_blocks=1 00:22:02.271 --rc geninfo_unexecuted_blocks=1 00:22:02.271 00:22:02.271 ' 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:22:02.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.271 --rc genhtml_branch_coverage=1 00:22:02.271 --rc genhtml_function_coverage=1 00:22:02.271 --rc genhtml_legend=1 00:22:02.271 --rc geninfo_all_blocks=1 00:22:02.271 --rc geninfo_unexecuted_blocks=1 00:22:02.271 00:22:02.271 ' 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:02.271 
09:54:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:02.271 00:22:02.271 real 0m0.203s 00:22:02.271 user 0m0.135s 00:22:02.271 sys 0m0.082s 00:22:02.271 09:54:25 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:02.271 ************************************ 00:22:02.271 END TEST dma 00:22:02.271 ************************************ 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.271 09:54:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.531 ************************************ 00:22:02.531 START TEST nvmf_identify 00:22:02.531 ************************************ 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:02.531 * Looking for test storage... 
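Each test's preamble traces the same lcov version probe through `scripts/common.sh`: `lt 1.15 2` splits both versions on `.`, `-`, and `:` and compares them field by field, which is why `1.15 < 2` holds (1 < 2 in the first field) even though a plain string compare would disagree. A simplified re-implementation of that comparison, not the exact `cmp_versions` helper (which also supports `>`, `=`, etc.):

```shell
# Simplified field-wise version compare, modeled on the cmp_versions
# trace: split on ".-:" and compare numerically, missing fields as 0.
version_lt() {
    local IFS='.-:'
    local -a v1 v2
    local i
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```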
00:22:02.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # lcov --version 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:22:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.531 --rc genhtml_branch_coverage=1 00:22:02.531 --rc genhtml_function_coverage=1 00:22:02.531 --rc genhtml_legend=1 00:22:02.531 --rc geninfo_all_blocks=1 00:22:02.531 --rc geninfo_unexecuted_blocks=1 00:22:02.531 00:22:02.531 ' 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1716 -- 
# LCOV_OPTS=' 00:22:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.531 --rc genhtml_branch_coverage=1 00:22:02.531 --rc genhtml_function_coverage=1 00:22:02.531 --rc genhtml_legend=1 00:22:02.531 --rc geninfo_all_blocks=1 00:22:02.531 --rc geninfo_unexecuted_blocks=1 00:22:02.531 00:22:02.531 ' 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:22:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.531 --rc genhtml_branch_coverage=1 00:22:02.531 --rc genhtml_function_coverage=1 00:22:02.531 --rc genhtml_legend=1 00:22:02.531 --rc geninfo_all_blocks=1 00:22:02.531 --rc geninfo_unexecuted_blocks=1 00:22:02.531 00:22:02.531 ' 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:22:02.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.531 --rc genhtml_branch_coverage=1 00:22:02.531 --rc genhtml_function_coverage=1 00:22:02.531 --rc genhtml_legend=1 00:22:02.531 --rc geninfo_all_blocks=1 00:22:02.531 --rc geninfo_unexecuted_blocks=1 00:22:02.531 00:22:02.531 ' 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.531 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
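The `[: : integer expression expected` message recorded above comes from `'[' '' -eq 1 ']'`: the test builtin's `-eq` requires integer operands, and an unset variable expands to an empty string. A minimal reproduction with the usual default-value guard (a sketch of the failure mode only, not the SPDK script's own fix):

```shell
#!/bin/sh
flag=""                          # empty, as in the log's '[' '' -eq 1 ']'
# [ "$flag" -eq 1 ] would emit "integer expression expected" here.
if [ "${flag:-0}" -eq 1 ]; then  # default to 0 so the comparison is well-formed
  echo enabled
else
  echo disabled
fi
```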
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.532 09:54:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.103 09:54:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:09.103 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.103 
09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:09.103 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:09.103 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:09.104 Found net devices under 0000:86:00.0: cvl_0_0 00:22:09.104 09:54:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:09.104 Found net devices under 0000:86:00.1: cvl_0_1 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:22:09.104 00:22:09.104 --- 10.0.0.2 ping statistics --- 00:22:09.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.104 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:22:09.104 00:22:09.104 --- 10.0.0.1 ping statistics --- 00:22:09.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.104 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
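The `nvmf_tcp_init` sequence logged above moves one port of the NIC into a network namespace so target and initiator traffic crosses the physical link. A dry-run sketch of that wiring (the names `cvl_0_0`, `cvl_0_1`, `cvl_0_0_ns_spdk`, and the 10.0.0.x addresses are taken from this log; the commands are echoed rather than executed because they need CAP_NET_ADMIN):

```shell
#!/bin/sh
# Dry-run of the namespace wiring shown in the log.
# Replace the echo in run() with "$@" to execute for real (as root).
run() { echo "$@"; }

NS=cvl_0_0_ns_spdk   # namespace name from the log
TGT=cvl_0_0          # target-side port (hardware-specific name)
INI=cvl_0_1          # initiator-side port

run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"                       # target port into the ns
run ip addr add 10.0.0.1/24 dev "$INI"                   # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"  # target IP
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
```

The log then validates the path in both directions with `ping` before starting `nvmf_tgt` inside the namespace.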
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2994165 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2994165 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2994165 ']' 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.104 09:54:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.104 [2024-11-20 09:54:31.801452] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:22:09.104 [2024-11-20 09:54:31.801496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.104 [2024-11-20 09:54:31.886131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.104 [2024-11-20 09:54:31.931270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.104 [2024-11-20 09:54:31.931310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.104 [2024-11-20 09:54:31.931318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.104 [2024-11-20 09:54:31.931325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.104 [2024-11-20 09:54:31.931330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.104 [2024-11-20 09:54:31.932762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.104 [2024-11-20 09:54:31.932873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.104 [2024-11-20 09:54:31.932890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.104 [2024-11-20 09:54:31.932897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.104 [2024-11-20 09:54:32.042464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.104 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.104 Malloc0 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.105 09:54:32 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.105 [2024-11-20 09:54:32.143246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.105 09:54:32 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.105 [ 00:22:09.105 { 00:22:09.105 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:09.105 "subtype": "Discovery", 00:22:09.105 "listen_addresses": [ 00:22:09.105 { 00:22:09.105 "trtype": "TCP", 00:22:09.105 "adrfam": "IPv4", 00:22:09.105 "traddr": "10.0.0.2", 00:22:09.105 "trsvcid": "4420" 00:22:09.105 } 00:22:09.105 ], 00:22:09.105 "allow_any_host": true, 00:22:09.105 "hosts": [] 00:22:09.105 }, 00:22:09.105 { 00:22:09.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.105 "subtype": "NVMe", 00:22:09.105 "listen_addresses": [ 00:22:09.105 { 00:22:09.105 "trtype": "TCP", 00:22:09.105 "adrfam": "IPv4", 00:22:09.105 "traddr": "10.0.0.2", 00:22:09.105 "trsvcid": "4420" 00:22:09.105 } 00:22:09.105 ], 00:22:09.105 "allow_any_host": true, 00:22:09.105 "hosts": [], 00:22:09.105 "serial_number": "SPDK00000000000001", 00:22:09.105 "model_number": "SPDK bdev Controller", 00:22:09.105 "max_namespaces": 32, 00:22:09.105 "min_cntlid": 1, 00:22:09.105 "max_cntlid": 65519, 00:22:09.105 "namespaces": [ 00:22:09.105 { 00:22:09.105 "nsid": 1, 00:22:09.105 "bdev_name": "Malloc0", 00:22:09.105 "name": "Malloc0", 00:22:09.105 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:09.105 "eui64": "ABCDEF0123456789", 00:22:09.105 "uuid": "1b21e466-439b-4960-8fc9-4ae307c300cf" 00:22:09.105 } 00:22:09.105 ] 00:22:09.105 } 00:22:09.105 ] 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.105 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
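The `nvmf_get_subsystems` reply interleaved in the log above is plain JSON, so it can be post-processed from a saved copy. A small sketch that extracts the subsystem NQNs with `grep`/`cut` (the heredoc is a trimmed, hand-copied version of the log's listing; no `jq` dependency is assumed):

```shell
#!/bin/sh
# Trimmed copy of the nvmf_get_subsystems reply shown in the log.
cat > subsys.json <<'EOF'
[
  { "nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery" },
  { "nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
    "namespaces": [ { "nsid": 1, "bdev_name": "Malloc0" } ] }
]
EOF
# Pull out each "nqn" value: field 4 when splitting on double quotes.
grep -o '"nqn": "[^"]*"' subsys.json | cut -d'"' -f4
rm -f subsys.json
```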
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:09.105 [2024-11-20 09:54:32.192554] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:22:09.105 [2024-11-20 09:54:32.192587] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994365 ] 00:22:09.105 [2024-11-20 09:54:32.232899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:09.105 [2024-11-20 09:54:32.232944] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:09.105 [2024-11-20 09:54:32.236953] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:09.105 [2024-11-20 09:54:32.236964] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:09.105 [2024-11-20 09:54:32.236974] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:09.105 [2024-11-20 09:54:32.237561] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:09.105 [2024-11-20 09:54:32.237593] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x200e690 0 00:22:09.105 [2024-11-20 09:54:32.251961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:09.105 [2024-11-20 09:54:32.251978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:09.105 [2024-11-20 09:54:32.251983] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:09.105 [2024-11-20 09:54:32.251986] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:09.105 [2024-11-20 09:54:32.252017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.252022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.252026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 00:22:09.105 [2024-11-20 09:54:32.252038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:09.105 [2024-11-20 09:54:32.252054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.105 [2024-11-20 09:54:32.259957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.105 [2024-11-20 09:54:32.259965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.105 [2024-11-20 09:54:32.259968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.259972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690 00:22:09.105 [2024-11-20 09:54:32.259981] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:09.105 [2024-11-20 09:54:32.259987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:09.105 [2024-11-20 09:54:32.259992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:09.105 [2024-11-20 09:54:32.260004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 
00:22:09.105 [2024-11-20 09:54:32.260018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.105 [2024-11-20 09:54:32.260031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.105 [2024-11-20 09:54:32.260200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.105 [2024-11-20 09:54:32.260206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.105 [2024-11-20 09:54:32.260209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690 00:22:09.105 [2024-11-20 09:54:32.260217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:09.105 [2024-11-20 09:54:32.260224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:09.105 [2024-11-20 09:54:32.260231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 00:22:09.105 [2024-11-20 09:54:32.260243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.105 [2024-11-20 09:54:32.260253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.105 [2024-11-20 09:54:32.260317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.105 [2024-11-20 09:54:32.260323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:09.105 [2024-11-20 09:54:32.260326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690 00:22:09.105 [2024-11-20 09:54:32.260334] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:09.105 [2024-11-20 09:54:32.260343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:09.105 [2024-11-20 09:54:32.260349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 00:22:09.105 [2024-11-20 09:54:32.260361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.105 [2024-11-20 09:54:32.260371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.105 [2024-11-20 09:54:32.260431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.105 [2024-11-20 09:54:32.260436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.105 [2024-11-20 09:54:32.260439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690 00:22:09.105 [2024-11-20 09:54:32.260447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:09.105 [2024-11-20 09:54:32.260455] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.105 [2024-11-20 09:54:32.260462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 00:22:09.105 [2024-11-20 09:54:32.260468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.106 [2024-11-20 09:54:32.260477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.106 [2024-11-20 09:54:32.260554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.106 [2024-11-20 09:54:32.260560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.106 [2024-11-20 09:54:32.260563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.260566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690 00:22:09.106 [2024-11-20 09:54:32.260570] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:09.106 [2024-11-20 09:54:32.260574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:09.106 [2024-11-20 09:54:32.260581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:09.106 [2024-11-20 09:54:32.260689] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:09.106 [2024-11-20 09:54:32.260693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:09.106 [2024-11-20 09:54:32.260700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.260704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.260707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 00:22:09.106 [2024-11-20 09:54:32.260713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.106 [2024-11-20 09:54:32.260722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.106 [2024-11-20 09:54:32.260796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.106 [2024-11-20 09:54:32.260801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.106 [2024-11-20 09:54:32.260806] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.260810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690 00:22:09.106 [2024-11-20 09:54:32.260814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:09.106 [2024-11-20 09:54:32.260822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.260825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.260828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 00:22:09.106 [2024-11-20 09:54:32.260834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.106 [2024-11-20 09:54:32.260843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.106 [2024-11-20 
09:54:32.260904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.106 [2024-11-20 09:54:32.260910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.106 [2024-11-20 09:54:32.260913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.260916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690 00:22:09.106 [2024-11-20 09:54:32.260920] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:09.106 [2024-11-20 09:54:32.260924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:09.106 [2024-11-20 09:54:32.260931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:09.106 [2024-11-20 09:54:32.260944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:09.106 [2024-11-20 09:54:32.260957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.260960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 00:22:09.106 [2024-11-20 09:54:32.260966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.106 [2024-11-20 09:54:32.260976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.106 [2024-11-20 09:54:32.261070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.106 [2024-11-20 09:54:32.261077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:09.106 [2024-11-20 09:54:32.261080] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.261083] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x200e690): datao=0, datal=4096, cccid=0 00:22:09.106 [2024-11-20 09:54:32.261087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2070100) on tqpair(0x200e690): expected_datao=0, payload_size=4096 00:22:09.106 [2024-11-20 09:54:32.261091] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.261104] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.261108] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.305954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.106 [2024-11-20 09:54:32.305964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.106 [2024-11-20 09:54:32.305968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.305971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690 00:22:09.106 [2024-11-20 09:54:32.305979] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:09.106 [2024-11-20 09:54:32.305986] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:09.106 [2024-11-20 09:54:32.305990] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:09.106 [2024-11-20 09:54:32.305998] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:09.106 [2024-11-20 09:54:32.306003] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:09.106 [2024-11-20 09:54:32.306007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:09.106 [2024-11-20 09:54:32.306017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:09.106 [2024-11-20 09:54:32.306024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 00:22:09.106 [2024-11-20 09:54:32.306039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.106 [2024-11-20 09:54:32.306052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.106 [2024-11-20 09:54:32.306202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.106 [2024-11-20 09:54:32.306208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.106 [2024-11-20 09:54:32.306211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690 00:22:09.106 [2024-11-20 09:54:32.306221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x200e690) 00:22:09.106 [2024-11-20 09:54:32.306232] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.106 [2024-11-20 09:54:32.306238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x200e690) 00:22:09.106 [2024-11-20 09:54:32.306249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.106 [2024-11-20 09:54:32.306254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x200e690) 00:22:09.106 [2024-11-20 09:54:32.306265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.106 [2024-11-20 09:54:32.306270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x200e690) 00:22:09.106 [2024-11-20 09:54:32.306281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.106 [2024-11-20 09:54:32.306286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:09.106 [2024-11-20 09:54:32.306294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:09.106 [2024-11-20 09:54:32.306302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x200e690) 00:22:09.106 [2024-11-20 09:54:32.306311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.106 [2024-11-20 09:54:32.306321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070100, cid 0, qid 0 00:22:09.106 [2024-11-20 09:54:32.306326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070280, cid 1, qid 0 00:22:09.106 [2024-11-20 09:54:32.306330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070400, cid 2, qid 0 00:22:09.106 [2024-11-20 09:54:32.306334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070580, cid 3, qid 0 00:22:09.106 [2024-11-20 09:54:32.306338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070700, cid 4, qid 0 00:22:09.106 [2024-11-20 09:54:32.306441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.106 [2024-11-20 09:54:32.306447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.106 [2024-11-20 09:54:32.306450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.106 [2024-11-20 09:54:32.306453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070700) on tqpair=0x200e690 00:22:09.106 [2024-11-20 09:54:32.306460] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:09.107 [2024-11-20 09:54:32.306464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:09.107 [2024-11-20 09:54:32.306475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x200e690) 00:22:09.107 [2024-11-20 09:54:32.306484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.107 [2024-11-20 09:54:32.306494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070700, cid 4, qid 0 00:22:09.107 [2024-11-20 09:54:32.306572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.107 [2024-11-20 09:54:32.306578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.107 [2024-11-20 09:54:32.306581] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306584] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x200e690): datao=0, datal=4096, cccid=4 00:22:09.107 [2024-11-20 09:54:32.306588] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2070700) on tqpair(0x200e690): expected_datao=0, payload_size=4096 00:22:09.107 [2024-11-20 09:54:32.306592] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306598] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306601] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.107 [2024-11-20 09:54:32.306619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.107 [2024-11-20 09:54:32.306622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x2070700) on tqpair=0x200e690 00:22:09.107 [2024-11-20 09:54:32.306635] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:09.107 [2024-11-20 09:54:32.306655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x200e690) 00:22:09.107 [2024-11-20 09:54:32.306664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.107 [2024-11-20 09:54:32.306673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x200e690) 00:22:09.107 [2024-11-20 09:54:32.306684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.107 [2024-11-20 09:54:32.306697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070700, cid 4, qid 0 00:22:09.107 [2024-11-20 09:54:32.306702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070880, cid 5, qid 0 00:22:09.107 [2024-11-20 09:54:32.306804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.107 [2024-11-20 09:54:32.306810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.107 [2024-11-20 09:54:32.306813] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306816] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x200e690): datao=0, datal=1024, cccid=4 00:22:09.107 [2024-11-20 09:54:32.306820] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2070700) on tqpair(0x200e690): expected_datao=0, payload_size=1024 00:22:09.107 [2024-11-20 09:54:32.306824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306829] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306832] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.107 [2024-11-20 09:54:32.306842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.107 [2024-11-20 09:54:32.306845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.306848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070880) on tqpair=0x200e690 00:22:09.107 [2024-11-20 09:54:32.348104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.107 [2024-11-20 09:54:32.348112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.107 [2024-11-20 09:54:32.348116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070700) on tqpair=0x200e690 00:22:09.107 [2024-11-20 09:54:32.348129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x200e690) 00:22:09.107 [2024-11-20 09:54:32.348139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.107 [2024-11-20 09:54:32.348155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070700, cid 4, qid 0 00:22:09.107 [2024-11-20 09:54:32.348244] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.107 [2024-11-20 09:54:32.348250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.107 [2024-11-20 09:54:32.348253] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348256] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x200e690): datao=0, datal=3072, cccid=4 00:22:09.107 [2024-11-20 09:54:32.348260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2070700) on tqpair(0x200e690): expected_datao=0, payload_size=3072 00:22:09.107 [2024-11-20 09:54:32.348263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348269] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348273] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.107 [2024-11-20 09:54:32.348292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.107 [2024-11-20 09:54:32.348297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070700) on tqpair=0x200e690 00:22:09.107 [2024-11-20 09:54:32.348308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x200e690) 00:22:09.107 [2024-11-20 09:54:32.348317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.107 [2024-11-20 09:54:32.348330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070700, cid 4, qid 0 00:22:09.107 [2024-11-20 
09:54:32.348403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.107 [2024-11-20 09:54:32.348408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.107 [2024-11-20 09:54:32.348411] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348414] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x200e690): datao=0, datal=8, cccid=4 00:22:09.107 [2024-11-20 09:54:32.348418] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2070700) on tqpair(0x200e690): expected_datao=0, payload_size=8 00:22:09.107 [2024-11-20 09:54:32.348422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348427] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.348430] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.393957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.107 [2024-11-20 09:54:32.393967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.107 [2024-11-20 09:54:32.393970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.107 [2024-11-20 09:54:32.393973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070700) on tqpair=0x200e690 00:22:09.107 ===================================================== 00:22:09.107 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:09.107 ===================================================== 00:22:09.107 Controller Capabilities/Features 00:22:09.107 ================================ 00:22:09.107 Vendor ID: 0000 00:22:09.107 Subsystem Vendor ID: 0000 00:22:09.107 Serial Number: .................... 00:22:09.107 Model Number: ........................................ 
00:22:09.107 Firmware Version: 25.01 00:22:09.107 Recommended Arb Burst: 0 00:22:09.107 IEEE OUI Identifier: 00 00 00 00:22:09.107 Multi-path I/O 00:22:09.107 May have multiple subsystem ports: No 00:22:09.107 May have multiple controllers: No 00:22:09.107 Associated with SR-IOV VF: No 00:22:09.107 Max Data Transfer Size: 131072 00:22:09.107 Max Number of Namespaces: 0 00:22:09.107 Max Number of I/O Queues: 1024 00:22:09.107 NVMe Specification Version (VS): 1.3 00:22:09.107 NVMe Specification Version (Identify): 1.3 00:22:09.107 Maximum Queue Entries: 128 00:22:09.107 Contiguous Queues Required: Yes 00:22:09.107 Arbitration Mechanisms Supported 00:22:09.107 Weighted Round Robin: Not Supported 00:22:09.107 Vendor Specific: Not Supported 00:22:09.107 Reset Timeout: 15000 ms 00:22:09.107 Doorbell Stride: 4 bytes 00:22:09.107 NVM Subsystem Reset: Not Supported 00:22:09.107 Command Sets Supported 00:22:09.107 NVM Command Set: Supported 00:22:09.107 Boot Partition: Not Supported 00:22:09.107 Memory Page Size Minimum: 4096 bytes 00:22:09.107 Memory Page Size Maximum: 4096 bytes 00:22:09.107 Persistent Memory Region: Not Supported 00:22:09.107 Optional Asynchronous Events Supported 00:22:09.107 Namespace Attribute Notices: Not Supported 00:22:09.107 Firmware Activation Notices: Not Supported 00:22:09.107 ANA Change Notices: Not Supported 00:22:09.107 PLE Aggregate Log Change Notices: Not Supported 00:22:09.107 LBA Status Info Alert Notices: Not Supported 00:22:09.107 EGE Aggregate Log Change Notices: Not Supported 00:22:09.107 Normal NVM Subsystem Shutdown event: Not Supported 00:22:09.107 Zone Descriptor Change Notices: Not Supported 00:22:09.107 Discovery Log Change Notices: Supported 00:22:09.107 Controller Attributes 00:22:09.107 128-bit Host Identifier: Not Supported 00:22:09.107 Non-Operational Permissive Mode: Not Supported 00:22:09.107 NVM Sets: Not Supported 00:22:09.108 Read Recovery Levels: Not Supported 00:22:09.108 Endurance Groups: Not Supported 00:22:09.108 
Predictable Latency Mode: Not Supported 00:22:09.108 Traffic Based Keep ALive: Not Supported 00:22:09.108 Namespace Granularity: Not Supported 00:22:09.108 SQ Associations: Not Supported 00:22:09.108 UUID List: Not Supported 00:22:09.108 Multi-Domain Subsystem: Not Supported 00:22:09.108 Fixed Capacity Management: Not Supported 00:22:09.108 Variable Capacity Management: Not Supported 00:22:09.108 Delete Endurance Group: Not Supported 00:22:09.108 Delete NVM Set: Not Supported 00:22:09.108 Extended LBA Formats Supported: Not Supported 00:22:09.108 Flexible Data Placement Supported: Not Supported 00:22:09.108 00:22:09.108 Controller Memory Buffer Support 00:22:09.108 ================================ 00:22:09.108 Supported: No 00:22:09.108 00:22:09.108 Persistent Memory Region Support 00:22:09.108 ================================ 00:22:09.108 Supported: No 00:22:09.108 00:22:09.108 Admin Command Set Attributes 00:22:09.108 ============================ 00:22:09.108 Security Send/Receive: Not Supported 00:22:09.108 Format NVM: Not Supported 00:22:09.108 Firmware Activate/Download: Not Supported 00:22:09.108 Namespace Management: Not Supported 00:22:09.108 Device Self-Test: Not Supported 00:22:09.108 Directives: Not Supported 00:22:09.108 NVMe-MI: Not Supported 00:22:09.108 Virtualization Management: Not Supported 00:22:09.108 Doorbell Buffer Config: Not Supported 00:22:09.108 Get LBA Status Capability: Not Supported 00:22:09.108 Command & Feature Lockdown Capability: Not Supported 00:22:09.108 Abort Command Limit: 1 00:22:09.108 Async Event Request Limit: 4 00:22:09.108 Number of Firmware Slots: N/A 00:22:09.108 Firmware Slot 1 Read-Only: N/A 00:22:09.108 Firmware Activation Without Reset: N/A 00:22:09.108 Multiple Update Detection Support: N/A 00:22:09.108 Firmware Update Granularity: No Information Provided 00:22:09.108 Per-Namespace SMART Log: No 00:22:09.108 Asymmetric Namespace Access Log Page: Not Supported 00:22:09.108 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:09.108 Command Effects Log Page: Not Supported 00:22:09.108 Get Log Page Extended Data: Supported 00:22:09.108 Telemetry Log Pages: Not Supported 00:22:09.108 Persistent Event Log Pages: Not Supported 00:22:09.108 Supported Log Pages Log Page: May Support 00:22:09.108 Commands Supported & Effects Log Page: Not Supported 00:22:09.108 Feature Identifiers & Effects Log Page:May Support 00:22:09.108 NVMe-MI Commands & Effects Log Page: May Support 00:22:09.108 Data Area 4 for Telemetry Log: Not Supported 00:22:09.108 Error Log Page Entries Supported: 128 00:22:09.108 Keep Alive: Not Supported 00:22:09.108 00:22:09.108 NVM Command Set Attributes 00:22:09.108 ========================== 00:22:09.108 Submission Queue Entry Size 00:22:09.108 Max: 1 00:22:09.108 Min: 1 00:22:09.108 Completion Queue Entry Size 00:22:09.108 Max: 1 00:22:09.108 Min: 1 00:22:09.108 Number of Namespaces: 0 00:22:09.108 Compare Command: Not Supported 00:22:09.108 Write Uncorrectable Command: Not Supported 00:22:09.108 Dataset Management Command: Not Supported 00:22:09.108 Write Zeroes Command: Not Supported 00:22:09.108 Set Features Save Field: Not Supported 00:22:09.108 Reservations: Not Supported 00:22:09.108 Timestamp: Not Supported 00:22:09.108 Copy: Not Supported 00:22:09.108 Volatile Write Cache: Not Present 00:22:09.108 Atomic Write Unit (Normal): 1 00:22:09.108 Atomic Write Unit (PFail): 1 00:22:09.108 Atomic Compare & Write Unit: 1 00:22:09.108 Fused Compare & Write: Supported 00:22:09.108 Scatter-Gather List 00:22:09.108 SGL Command Set: Supported 00:22:09.108 SGL Keyed: Supported 00:22:09.108 SGL Bit Bucket Descriptor: Not Supported 00:22:09.108 SGL Metadata Pointer: Not Supported 00:22:09.108 Oversized SGL: Not Supported 00:22:09.108 SGL Metadata Address: Not Supported 00:22:09.108 SGL Offset: Supported 00:22:09.108 Transport SGL Data Block: Not Supported 00:22:09.108 Replay Protected Memory Block: Not Supported 00:22:09.108 00:22:09.108 
00:22:09.108 Firmware Slot Information
00:22:09.108 =========================
00:22:09.108 Active slot: 0
00:22:09.108 
00:22:09.108 Error Log
00:22:09.108 =========
00:22:09.108 
00:22:09.108 Active Namespaces
00:22:09.108 =================
00:22:09.108 Discovery Log Page
00:22:09.108 ==================
00:22:09.108 Generation Counter: 2
00:22:09.108 Number of Records: 2
00:22:09.108 Record Format: 0
00:22:09.108 
00:22:09.108 Discovery Log Entry 0
00:22:09.108 ----------------------
00:22:09.108 Transport Type: 3 (TCP)
00:22:09.108 Address Family: 1 (IPv4)
00:22:09.108 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:09.108 Entry Flags:
00:22:09.108   Duplicate Returned Information: 1
00:22:09.108   Explicit Persistent Connection Support for Discovery: 1
00:22:09.108 Transport Requirements:
00:22:09.108   Secure Channel: Not Required
00:22:09.108 Port ID: 0 (0x0000)
00:22:09.108 Controller ID: 65535 (0xffff)
00:22:09.108 Admin Max SQ Size: 128
00:22:09.108 Transport Service Identifier: 4420
00:22:09.108 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:09.108 Transport Address: 10.0.0.2
00:22:09.108 
00:22:09.108 Discovery Log Entry 1
00:22:09.108 ----------------------
00:22:09.108 Transport Type: 3 (TCP)
00:22:09.108 Address Family: 1 (IPv4)
00:22:09.108 Subsystem Type: 2 (NVM Subsystem)
00:22:09.108 Entry Flags:
00:22:09.108   Duplicate Returned Information: 0
00:22:09.108   Explicit Persistent Connection Support for Discovery: 0
00:22:09.108 Transport Requirements:
00:22:09.108   Secure Channel: Not Required
00:22:09.108 Port ID: 0 (0x0000)
00:22:09.108 Controller ID: 65535 (0xffff)
00:22:09.108 Admin Max SQ Size: 128
00:22:09.108 Transport Service Identifier: 4420
00:22:09.108 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:09.108 Transport Address: 10.0.0.2
00:22:09.108 [2024-11-20 09:54:32.394056] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:22:09.108 [2024-11-20 09:54:32.394066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070100) on tqpair=0x200e690
00:22:09.108 [2024-11-20 09:54:32.394073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:09.108 [2024-11-20 09:54:32.394078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070280) on tqpair=0x200e690
00:22:09.109 [2024-11-20 09:54:32.394082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:09.109 [2024-11-20 09:54:32.394087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070400) on tqpair=0x200e690
00:22:09.109 [2024-11-20 09:54:32.394091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:09.109 [2024-11-20 09:54:32.394096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070580) on tqpair=0x200e690
00:22:09.109 [2024-11-20 09:54:32.394100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:09.109 [2024-11-20 09:54:32.394110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.109 [2024-11-20 09:54:32.394114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:09.109 [2024-11-20 09:54:32.394118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x200e690)
00:22:09.109 [2024-11-20 09:54:32.394124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:09.109 [2024-11-20 09:54:32.394137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070580, cid 3, qid 0
00:22:09.109 [2024-11-20 09:54:32.394197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:09.109 [2024-11-20 09:54:32.394203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:09.109 [2024-11-20 09:54:32.394207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.109 [2024-11-20 09:54:32.394211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070580) on tqpair=0x200e690
00:22:09.109 [2024-11-20 09:54:32.394217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.109 [2024-11-20 09:54:32.394220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:09.109 [2024-11-20 09:54:32.394223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x200e690)
00:22:09.109 [2024-11-20 09:54:32.394229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:09.109 [2024-11-20 09:54:32.394241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070580, cid 3, qid 0
00:22:09.109 [2024-11-20 09:54:32.394315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:09.109 [2024-11-20 09:54:32.394321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:09.109 [2024-11-20 09:54:32.394324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.109 [2024-11-20 09:54:32.394327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070580) on tqpair=0x200e690
00:22:09.109 [2024-11-20 09:54:32.394332] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:22:09.109 [2024-11-20 09:54:32.394336] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:22:09.109 [2024-11-20 09:54:32.394344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.109 [2024-11-20 09:54:32.394348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:09.109 [2024-11-20 09:54:32.394351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x200e690)
00:22:09.109 [2024-11-20 09:54:32.394357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:09.109 [2024-11-20 09:54:32.394366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070580, cid 3, qid 0
[... repeated identical FABRIC PROPERTY GET / capsule-response cycles (09:54:32.394427 through 09:54:32.397439) elided: the host polls controller status while waiting for the shutdown started above to complete ...]
00:22:09.111 [2024-11-20 09:54:32.397501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:09.111 [2024-11-20 09:54:32.397506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:09.111 [2024-11-20 09:54:32.397509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.111 [2024-11-20 09:54:32.397512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070580) on tqpair=0x200e690
00:22:09.111 [2024-11-20 09:54:32.397520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:09.111 [2024-11-20 09:54:32.397524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:09.111 [2024-11-20 09:54:32.397527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x200e690)
00:22:09.111 [2024-11-20 09:54:32.397532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:09.111 [2024-11-20 09:54:32.397541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req
0x2070580, cid 3, qid 0 00:22:09.111 [2024-11-20 09:54:32.397601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.111 [2024-11-20 09:54:32.397606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.111 [2024-11-20 09:54:32.397609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.111 [2024-11-20 09:54:32.397612] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070580) on tqpair=0x200e690 00:22:09.111 [2024-11-20 09:54:32.397621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.111 [2024-11-20 09:54:32.397624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.111 [2024-11-20 09:54:32.397627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x200e690) 00:22:09.111 [2024-11-20 09:54:32.397634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.112 [2024-11-20 09:54:32.397643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070580, cid 3, qid 0 00:22:09.112 [2024-11-20 09:54:32.397711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.112 [2024-11-20 09:54:32.397717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.112 [2024-11-20 09:54:32.397720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.397723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070580) on tqpair=0x200e690 00:22:09.112 [2024-11-20 09:54:32.397731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.397735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.397738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x200e690) 00:22:09.112 [2024-11-20 09:54:32.397743] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.112 [2024-11-20 09:54:32.397752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070580, cid 3, qid 0 00:22:09.112 [2024-11-20 09:54:32.397814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.112 [2024-11-20 09:54:32.397820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.112 [2024-11-20 09:54:32.397823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.397826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070580) on tqpair=0x200e690 00:22:09.112 [2024-11-20 09:54:32.397834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.397838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.397841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x200e690) 00:22:09.112 [2024-11-20 09:54:32.397846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.112 [2024-11-20 09:54:32.397855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070580, cid 3, qid 0 00:22:09.112 [2024-11-20 09:54:32.397920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.112 [2024-11-20 09:54:32.397926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.112 [2024-11-20 09:54:32.397928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.397932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070580) on tqpair=0x200e690 00:22:09.112 [2024-11-20 09:54:32.397940] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.397943] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.401951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x200e690) 00:22:09.112 [2024-11-20 09:54:32.401959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.112 [2024-11-20 09:54:32.401970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2070580, cid 3, qid 0 00:22:09.112 [2024-11-20 09:54:32.402123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.112 [2024-11-20 09:54:32.402128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.112 [2024-11-20 09:54:32.402131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.112 [2024-11-20 09:54:32.402135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2070580) on tqpair=0x200e690 00:22:09.112 [2024-11-20 09:54:32.402141] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:22:09.112 00:22:09.112 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:09.375 [2024-11-20 09:54:32.437767] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:22:09.375 [2024-11-20 09:54:32.437801] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994369 ] 00:22:09.375 [2024-11-20 09:54:32.474633] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:09.375 [2024-11-20 09:54:32.474672] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:09.375 [2024-11-20 09:54:32.474676] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:09.375 [2024-11-20 09:54:32.474687] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:09.375 [2024-11-20 09:54:32.474695] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:09.375 [2024-11-20 09:54:32.482125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:09.375 [2024-11-20 09:54:32.482159] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x613690 0 00:22:09.375 [2024-11-20 09:54:32.482312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:09.375 [2024-11-20 09:54:32.482319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:09.375 [2024-11-20 09:54:32.482322] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:09.375 [2024-11-20 09:54:32.482325] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:09.375 [2024-11-20 09:54:32.482347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.482351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.482355] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.375 [2024-11-20 09:54:32.482364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:09.375 [2024-11-20 09:54:32.482376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675100, cid 0, qid 0 00:22:09.375 [2024-11-20 09:54:32.489959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.375 [2024-11-20 09:54:32.489968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.375 [2024-11-20 09:54:32.489971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.489975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.375 [2024-11-20 09:54:32.489985] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:09.375 [2024-11-20 09:54:32.489991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:09.375 [2024-11-20 09:54:32.489996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:09.375 [2024-11-20 09:54:32.490006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.375 [2024-11-20 09:54:32.490020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.375 [2024-11-20 09:54:32.490033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675100, cid 0, qid 0 00:22:09.375 [2024-11-20 09:54:32.490190] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.375 [2024-11-20 09:54:32.490196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.375 [2024-11-20 09:54:32.490201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.375 [2024-11-20 09:54:32.490209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:09.375 [2024-11-20 09:54:32.490215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:09.375 [2024-11-20 09:54:32.490222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.375 [2024-11-20 09:54:32.490235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.375 [2024-11-20 09:54:32.490244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675100, cid 0, qid 0 00:22:09.375 [2024-11-20 09:54:32.490306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.375 [2024-11-20 09:54:32.490311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.375 [2024-11-20 09:54:32.490315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.375 [2024-11-20 09:54:32.490322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to check en (no timeout) 00:22:09.375 [2024-11-20 09:54:32.490329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:09.375 [2024-11-20 09:54:32.490335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.375 [2024-11-20 09:54:32.490347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.375 [2024-11-20 09:54:32.490356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675100, cid 0, qid 0 00:22:09.375 [2024-11-20 09:54:32.490418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.375 [2024-11-20 09:54:32.490423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.375 [2024-11-20 09:54:32.490427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.375 [2024-11-20 09:54:32.490434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:09.375 [2024-11-20 09:54:32.490442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.375 [2024-11-20 09:54:32.490455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.375 [2024-11-20 09:54:32.490464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675100, cid 0, qid 0 00:22:09.375 [2024-11-20 09:54:32.490528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.375 [2024-11-20 09:54:32.490534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.375 [2024-11-20 09:54:32.490537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.375 [2024-11-20 09:54:32.490544] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:09.375 [2024-11-20 09:54:32.490550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:09.375 [2024-11-20 09:54:32.490557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:09.375 [2024-11-20 09:54:32.490664] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:09.375 [2024-11-20 09:54:32.490668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:09.375 [2024-11-20 09:54:32.490676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.375 [2024-11-20 09:54:32.490688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.375 [2024-11-20 09:54:32.490698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675100, cid 0, qid 0 00:22:09.375 [2024-11-20 09:54:32.490760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.375 [2024-11-20 09:54:32.490765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.375 [2024-11-20 09:54:32.490768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.375 [2024-11-20 09:54:32.490776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:09.375 [2024-11-20 09:54:32.490784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.375 [2024-11-20 09:54:32.490790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.375 [2024-11-20 09:54:32.490796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.376 [2024-11-20 09:54:32.490806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675100, cid 0, qid 0 00:22:09.376 [2024-11-20 09:54:32.490870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.376 [2024-11-20 09:54:32.490875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.376 [2024-11-20 09:54:32.490878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.490881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.376 [2024-11-20 09:54:32.490885] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:09.376 [2024-11-20 09:54:32.490889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.490896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:09.376 [2024-11-20 09:54:32.490906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.490914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.490917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.376 [2024-11-20 09:54:32.490923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.376 [2024-11-20 09:54:32.490933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675100, cid 0, qid 0 00:22:09.376 [2024-11-20 09:54:32.491027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.376 [2024-11-20 09:54:32.491033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.376 [2024-11-20 09:54:32.491037] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491040] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613690): datao=0, datal=4096, cccid=0 00:22:09.376 [2024-11-20 09:54:32.491044] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x675100) on tqpair(0x613690): expected_datao=0, payload_size=4096 00:22:09.376 [2024-11-20 09:54:32.491048] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491064] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491068] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.376 [2024-11-20 09:54:32.491107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.376 [2024-11-20 09:54:32.491110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.376 [2024-11-20 09:54:32.491120] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:09.376 [2024-11-20 09:54:32.491124] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:09.376 [2024-11-20 09:54:32.491128] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:09.376 [2024-11-20 09:54:32.491134] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:09.376 [2024-11-20 09:54:32.491138] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:09.376 [2024-11-20 09:54:32.491142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.491151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.491157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491160] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.376 [2024-11-20 09:54:32.491169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.376 [2024-11-20 09:54:32.491180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675100, cid 0, qid 0 00:22:09.376 [2024-11-20 09:54:32.491243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.376 [2024-11-20 09:54:32.491248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.376 [2024-11-20 09:54:32.491251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.376 [2024-11-20 09:54:32.491260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x613690) 00:22:09.376 [2024-11-20 09:54:32.491272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.376 [2024-11-20 09:54:32.491277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x613690) 00:22:09.376 [2024-11-20 09:54:32.491290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:09.376 [2024-11-20 09:54:32.491295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x613690) 00:22:09.376 [2024-11-20 09:54:32.491307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.376 [2024-11-20 09:54:32.491312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.376 [2024-11-20 09:54:32.491323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.376 [2024-11-20 09:54:32.491327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.491335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.491340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613690) 00:22:09.376 [2024-11-20 09:54:32.491349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.376 [2024-11-20 09:54:32.491360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x675100, cid 0, qid 0 00:22:09.376 [2024-11-20 09:54:32.491365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675280, cid 1, qid 0 00:22:09.376 [2024-11-20 09:54:32.491369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675400, cid 2, qid 0 00:22:09.376 [2024-11-20 09:54:32.491373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.376 [2024-11-20 09:54:32.491377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675700, cid 4, qid 0 00:22:09.376 [2024-11-20 09:54:32.491478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.376 [2024-11-20 09:54:32.491484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.376 [2024-11-20 09:54:32.491487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675700) on tqpair=0x613690 00:22:09.376 [2024-11-20 09:54:32.491496] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:09.376 [2024-11-20 09:54:32.491500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.491508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.491513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.491518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.376 [2024-11-20 
09:54:32.491525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613690) 00:22:09.376 [2024-11-20 09:54:32.491530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:09.376 [2024-11-20 09:54:32.491541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675700, cid 4, qid 0 00:22:09.376 [2024-11-20 09:54:32.491603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.376 [2024-11-20 09:54:32.491609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.376 [2024-11-20 09:54:32.491612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675700) on tqpair=0x613690 00:22:09.376 [2024-11-20 09:54:32.491668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.491678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:09.376 [2024-11-20 09:54:32.491684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613690) 00:22:09.376 [2024-11-20 09:54:32.491693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.376 [2024-11-20 09:54:32.491703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675700, cid 4, qid 0 00:22:09.376 [2024-11-20 09:54:32.491778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.376 [2024-11-20 09:54:32.491784] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.376 [2024-11-20 09:54:32.491787] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491791] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613690): datao=0, datal=4096, cccid=4 00:22:09.376 [2024-11-20 09:54:32.491795] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x675700) on tqpair(0x613690): expected_datao=0, payload_size=4096 00:22:09.376 [2024-11-20 09:54:32.491798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491809] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.376 [2024-11-20 09:54:32.491813] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.533073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.377 [2024-11-20 09:54:32.533083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.377 [2024-11-20 09:54:32.533087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.533090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675700) on tqpair=0x613690 00:22:09.377 [2024-11-20 09:54:32.533099] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:09.377 [2024-11-20 09:54:32.533108] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.533116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.533123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.533126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x613690) 00:22:09.377 [2024-11-20 09:54:32.533132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.377 [2024-11-20 09:54:32.533144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675700, cid 4, qid 0 00:22:09.377 [2024-11-20 09:54:32.533225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.377 [2024-11-20 09:54:32.533231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.377 [2024-11-20 09:54:32.533234] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.533237] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613690): datao=0, datal=4096, cccid=4 00:22:09.377 [2024-11-20 09:54:32.533241] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x675700) on tqpair(0x613690): expected_datao=0, payload_size=4096 00:22:09.377 [2024-11-20 09:54:32.533247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.533258] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.533261] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.577953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.377 [2024-11-20 09:54:32.577962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.377 [2024-11-20 09:54:32.577965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.577969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675700) on tqpair=0x613690 00:22:09.377 [2024-11-20 09:54:32.577981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:09.377 [2024-11-20 
09:54:32.577991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.577998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.578002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613690) 00:22:09.377 [2024-11-20 09:54:32.578008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.377 [2024-11-20 09:54:32.578020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675700, cid 4, qid 0 00:22:09.377 [2024-11-20 09:54:32.578176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.377 [2024-11-20 09:54:32.578182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.377 [2024-11-20 09:54:32.578185] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.578189] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613690): datao=0, datal=4096, cccid=4 00:22:09.377 [2024-11-20 09:54:32.578193] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x675700) on tqpair(0x613690): expected_datao=0, payload_size=4096 00:22:09.377 [2024-11-20 09:54:32.578196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.578207] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.578210] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.377 [2024-11-20 09:54:32.620098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.377 [2024-11-20 09:54:32.620101] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675700) on tqpair=0x613690 00:22:09.377 [2024-11-20 09:54:32.620114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.620122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.620130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.620136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.620141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.620145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.620150] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:09.377 [2024-11-20 09:54:32.620156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:09.377 [2024-11-20 09:54:32.620161] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:09.377 [2024-11-20 09:54:32.620175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620179] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613690) 00:22:09.377 [2024-11-20 09:54:32.620186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.377 [2024-11-20 09:54:32.620192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613690) 00:22:09.377 [2024-11-20 09:54:32.620204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.377 [2024-11-20 09:54:32.620218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675700, cid 4, qid 0 00:22:09.377 [2024-11-20 09:54:32.620223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675880, cid 5, qid 0 00:22:09.377 [2024-11-20 09:54:32.620300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.377 [2024-11-20 09:54:32.620306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.377 [2024-11-20 09:54:32.620309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675700) on tqpair=0x613690 00:22:09.377 [2024-11-20 09:54:32.620318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.377 [2024-11-20 09:54:32.620323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.377 [2024-11-20 09:54:32.620326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675880) on tqpair=0x613690 00:22:09.377 [2024-11-20 
09:54:32.620337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613690) 00:22:09.377 [2024-11-20 09:54:32.620346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.377 [2024-11-20 09:54:32.620356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675880, cid 5, qid 0 00:22:09.377 [2024-11-20 09:54:32.620420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.377 [2024-11-20 09:54:32.620426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.377 [2024-11-20 09:54:32.620429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675880) on tqpair=0x613690 00:22:09.377 [2024-11-20 09:54:32.620440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613690) 00:22:09.377 [2024-11-20 09:54:32.620449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.377 [2024-11-20 09:54:32.620458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675880, cid 5, qid 0 00:22:09.377 [2024-11-20 09:54:32.620529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.377 [2024-11-20 09:54:32.620534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.377 [2024-11-20 09:54:32.620537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x675880) on tqpair=0x613690 00:22:09.377 [2024-11-20 09:54:32.620551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613690) 00:22:09.377 [2024-11-20 09:54:32.620560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.377 [2024-11-20 09:54:32.620569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675880, cid 5, qid 0 00:22:09.377 [2024-11-20 09:54:32.620637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.377 [2024-11-20 09:54:32.620642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.377 [2024-11-20 09:54:32.620645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675880) on tqpair=0x613690 00:22:09.377 [2024-11-20 09:54:32.620661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x613690) 00:22:09.377 [2024-11-20 09:54:32.620671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.377 [2024-11-20 09:54:32.620677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x613690) 00:22:09.377 [2024-11-20 09:54:32.620686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.377 
[2024-11-20 09:54:32.620692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.377 [2024-11-20 09:54:32.620695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x613690) 00:22:09.378 [2024-11-20 09:54:32.620700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.378 [2024-11-20 09:54:32.620706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.620710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x613690) 00:22:09.378 [2024-11-20 09:54:32.620715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.378 [2024-11-20 09:54:32.620725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675880, cid 5, qid 0 00:22:09.378 [2024-11-20 09:54:32.620730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675700, cid 4, qid 0 00:22:09.378 [2024-11-20 09:54:32.620734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675a00, cid 6, qid 0 00:22:09.378 [2024-11-20 09:54:32.620738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675b80, cid 7, qid 0 00:22:09.378 [2024-11-20 09:54:32.620875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.378 [2024-11-20 09:54:32.620881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.378 [2024-11-20 09:54:32.620884] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.620888] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613690): datao=0, datal=8192, cccid=5 00:22:09.378 [2024-11-20 09:54:32.620892] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x675880) on tqpair(0x613690): expected_datao=0, payload_size=8192 00:22:09.378 [2024-11-20 09:54:32.620895] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.620924] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.620928] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.620933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.378 [2024-11-20 09:54:32.620938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.378 [2024-11-20 09:54:32.620944] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.622983] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613690): datao=0, datal=512, cccid=4 00:22:09.378 [2024-11-20 09:54:32.622988] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x675700) on tqpair(0x613690): expected_datao=0, payload_size=512 00:22:09.378 [2024-11-20 09:54:32.622992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.622997] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623001] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.378 [2024-11-20 09:54:32.623011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.378 [2024-11-20 09:54:32.623013] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623017] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613690): datao=0, datal=512, cccid=6 00:22:09.378 [2024-11-20 09:54:32.623021] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x675a00) on tqpair(0x613690): expected_datao=0, 
payload_size=512 00:22:09.378 [2024-11-20 09:54:32.623024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623030] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623033] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:09.378 [2024-11-20 09:54:32.623042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:09.378 [2024-11-20 09:54:32.623045] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623048] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x613690): datao=0, datal=4096, cccid=7 00:22:09.378 [2024-11-20 09:54:32.623052] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x675b80) on tqpair(0x613690): expected_datao=0, payload_size=4096 00:22:09.378 [2024-11-20 09:54:32.623056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623061] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623065] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.378 [2024-11-20 09:54:32.623077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.378 [2024-11-20 09:54:32.623080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675880) on tqpair=0x613690 00:22:09.378 [2024-11-20 09:54:32.623095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.378 [2024-11-20 09:54:32.623100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.378 [2024-11-20 
09:54:32.623103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675700) on tqpair=0x613690 00:22:09.378 [2024-11-20 09:54:32.623114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.378 [2024-11-20 09:54:32.623119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.378 [2024-11-20 09:54:32.623123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675a00) on tqpair=0x613690 00:22:09.378 [2024-11-20 09:54:32.623132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.378 [2024-11-20 09:54:32.623137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.378 [2024-11-20 09:54:32.623140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.378 [2024-11-20 09:54:32.623143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675b80) on tqpair=0x613690 00:22:09.378 ===================================================== 00:22:09.378 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.378 ===================================================== 00:22:09.378 Controller Capabilities/Features 00:22:09.378 ================================ 00:22:09.378 Vendor ID: 8086 00:22:09.378 Subsystem Vendor ID: 8086 00:22:09.378 Serial Number: SPDK00000000000001 00:22:09.378 Model Number: SPDK bdev Controller 00:22:09.378 Firmware Version: 25.01 00:22:09.378 Recommended Arb Burst: 6 00:22:09.378 IEEE OUI Identifier: e4 d2 5c 00:22:09.378 Multi-path I/O 00:22:09.378 May have multiple subsystem ports: Yes 00:22:09.378 May have multiple controllers: Yes 00:22:09.378 Associated with SR-IOV VF: No 00:22:09.378 Max Data Transfer Size: 131072 00:22:09.378 Max Number of Namespaces: 32 00:22:09.378 
Max Number of I/O Queues: 127 00:22:09.378 NVMe Specification Version (VS): 1.3 00:22:09.378 NVMe Specification Version (Identify): 1.3 00:22:09.378 Maximum Queue Entries: 128 00:22:09.378 Contiguous Queues Required: Yes 00:22:09.378 Arbitration Mechanisms Supported 00:22:09.378 Weighted Round Robin: Not Supported 00:22:09.378 Vendor Specific: Not Supported 00:22:09.378 Reset Timeout: 15000 ms 00:22:09.378 Doorbell Stride: 4 bytes 00:22:09.378 NVM Subsystem Reset: Not Supported 00:22:09.378 Command Sets Supported 00:22:09.378 NVM Command Set: Supported 00:22:09.378 Boot Partition: Not Supported 00:22:09.378 Memory Page Size Minimum: 4096 bytes 00:22:09.378 Memory Page Size Maximum: 4096 bytes 00:22:09.378 Persistent Memory Region: Not Supported 00:22:09.378 Optional Asynchronous Events Supported 00:22:09.378 Namespace Attribute Notices: Supported 00:22:09.378 Firmware Activation Notices: Not Supported 00:22:09.378 ANA Change Notices: Not Supported 00:22:09.378 PLE Aggregate Log Change Notices: Not Supported 00:22:09.378 LBA Status Info Alert Notices: Not Supported 00:22:09.378 EGE Aggregate Log Change Notices: Not Supported 00:22:09.378 Normal NVM Subsystem Shutdown event: Not Supported 00:22:09.378 Zone Descriptor Change Notices: Not Supported 00:22:09.378 Discovery Log Change Notices: Not Supported 00:22:09.378 Controller Attributes 00:22:09.378 128-bit Host Identifier: Supported 00:22:09.378 Non-Operational Permissive Mode: Not Supported 00:22:09.378 NVM Sets: Not Supported 00:22:09.378 Read Recovery Levels: Not Supported 00:22:09.378 Endurance Groups: Not Supported 00:22:09.378 Predictable Latency Mode: Not Supported 00:22:09.378 Traffic Based Keep ALive: Not Supported 00:22:09.378 Namespace Granularity: Not Supported 00:22:09.378 SQ Associations: Not Supported 00:22:09.378 UUID List: Not Supported 00:22:09.378 Multi-Domain Subsystem: Not Supported 00:22:09.378 Fixed Capacity Management: Not Supported 00:22:09.378 Variable Capacity Management: Not Supported 
00:22:09.378 Delete Endurance Group: Not Supported 00:22:09.378 Delete NVM Set: Not Supported 00:22:09.378 Extended LBA Formats Supported: Not Supported 00:22:09.378 Flexible Data Placement Supported: Not Supported 00:22:09.378 00:22:09.378 Controller Memory Buffer Support 00:22:09.378 ================================ 00:22:09.378 Supported: No 00:22:09.378 00:22:09.378 Persistent Memory Region Support 00:22:09.378 ================================ 00:22:09.378 Supported: No 00:22:09.378 00:22:09.378 Admin Command Set Attributes 00:22:09.378 ============================ 00:22:09.378 Security Send/Receive: Not Supported 00:22:09.378 Format NVM: Not Supported 00:22:09.378 Firmware Activate/Download: Not Supported 00:22:09.378 Namespace Management: Not Supported 00:22:09.378 Device Self-Test: Not Supported 00:22:09.378 Directives: Not Supported 00:22:09.378 NVMe-MI: Not Supported 00:22:09.378 Virtualization Management: Not Supported 00:22:09.378 Doorbell Buffer Config: Not Supported 00:22:09.378 Get LBA Status Capability: Not Supported 00:22:09.378 Command & Feature Lockdown Capability: Not Supported 00:22:09.378 Abort Command Limit: 4 00:22:09.378 Async Event Request Limit: 4 00:22:09.378 Number of Firmware Slots: N/A 00:22:09.378 Firmware Slot 1 Read-Only: N/A 00:22:09.378 Firmware Activation Without Reset: N/A 00:22:09.379 Multiple Update Detection Support: N/A 00:22:09.379 Firmware Update Granularity: No Information Provided 00:22:09.379 Per-Namespace SMART Log: No 00:22:09.379 Asymmetric Namespace Access Log Page: Not Supported 00:22:09.379 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:09.379 Command Effects Log Page: Supported 00:22:09.379 Get Log Page Extended Data: Supported 00:22:09.379 Telemetry Log Pages: Not Supported 00:22:09.379 Persistent Event Log Pages: Not Supported 00:22:09.379 Supported Log Pages Log Page: May Support 00:22:09.379 Commands Supported & Effects Log Page: Not Supported 00:22:09.379 Feature Identifiers & Effects Log Page:May Support 
00:22:09.379 NVMe-MI Commands & Effects Log Page: May Support 00:22:09.379 Data Area 4 for Telemetry Log: Not Supported 00:22:09.379 Error Log Page Entries Supported: 128 00:22:09.379 Keep Alive: Supported 00:22:09.379 Keep Alive Granularity: 10000 ms 00:22:09.379 00:22:09.379 NVM Command Set Attributes 00:22:09.379 ========================== 00:22:09.379 Submission Queue Entry Size 00:22:09.379 Max: 64 00:22:09.379 Min: 64 00:22:09.379 Completion Queue Entry Size 00:22:09.379 Max: 16 00:22:09.379 Min: 16 00:22:09.379 Number of Namespaces: 32 00:22:09.379 Compare Command: Supported 00:22:09.379 Write Uncorrectable Command: Not Supported 00:22:09.379 Dataset Management Command: Supported 00:22:09.379 Write Zeroes Command: Supported 00:22:09.379 Set Features Save Field: Not Supported 00:22:09.379 Reservations: Supported 00:22:09.379 Timestamp: Not Supported 00:22:09.379 Copy: Supported 00:22:09.379 Volatile Write Cache: Present 00:22:09.379 Atomic Write Unit (Normal): 1 00:22:09.379 Atomic Write Unit (PFail): 1 00:22:09.379 Atomic Compare & Write Unit: 1 00:22:09.379 Fused Compare & Write: Supported 00:22:09.379 Scatter-Gather List 00:22:09.379 SGL Command Set: Supported 00:22:09.379 SGL Keyed: Supported 00:22:09.379 SGL Bit Bucket Descriptor: Not Supported 00:22:09.379 SGL Metadata Pointer: Not Supported 00:22:09.379 Oversized SGL: Not Supported 00:22:09.379 SGL Metadata Address: Not Supported 00:22:09.379 SGL Offset: Supported 00:22:09.379 Transport SGL Data Block: Not Supported 00:22:09.379 Replay Protected Memory Block: Not Supported 00:22:09.379 00:22:09.379 Firmware Slot Information 00:22:09.379 ========================= 00:22:09.379 Active slot: 1 00:22:09.379 Slot 1 Firmware Revision: 25.01 00:22:09.379 00:22:09.379 00:22:09.379 Commands Supported and Effects 00:22:09.379 ============================== 00:22:09.379 Admin Commands 00:22:09.379 -------------- 00:22:09.379 Get Log Page (02h): Supported 00:22:09.379 Identify (06h): Supported 00:22:09.379 Abort 
(08h): Supported 00:22:09.379 Set Features (09h): Supported 00:22:09.379 Get Features (0Ah): Supported 00:22:09.379 Asynchronous Event Request (0Ch): Supported 00:22:09.379 Keep Alive (18h): Supported 00:22:09.379 I/O Commands 00:22:09.379 ------------ 00:22:09.379 Flush (00h): Supported LBA-Change 00:22:09.379 Write (01h): Supported LBA-Change 00:22:09.379 Read (02h): Supported 00:22:09.379 Compare (05h): Supported 00:22:09.379 Write Zeroes (08h): Supported LBA-Change 00:22:09.379 Dataset Management (09h): Supported LBA-Change 00:22:09.379 Copy (19h): Supported LBA-Change 00:22:09.379 00:22:09.379 Error Log 00:22:09.379 ========= 00:22:09.379 00:22:09.379 Arbitration 00:22:09.379 =========== 00:22:09.379 Arbitration Burst: 1 00:22:09.379 00:22:09.379 Power Management 00:22:09.379 ================ 00:22:09.379 Number of Power States: 1 00:22:09.379 Current Power State: Power State #0 00:22:09.379 Power State #0: 00:22:09.379 Max Power: 0.00 W 00:22:09.379 Non-Operational State: Operational 00:22:09.379 Entry Latency: Not Reported 00:22:09.379 Exit Latency: Not Reported 00:22:09.379 Relative Read Throughput: 0 00:22:09.379 Relative Read Latency: 0 00:22:09.379 Relative Write Throughput: 0 00:22:09.379 Relative Write Latency: 0 00:22:09.379 Idle Power: Not Reported 00:22:09.379 Active Power: Not Reported 00:22:09.379 Non-Operational Permissive Mode: Not Supported 00:22:09.379 00:22:09.379 Health Information 00:22:09.379 ================== 00:22:09.379 Critical Warnings: 00:22:09.379 Available Spare Space: OK 00:22:09.379 Temperature: OK 00:22:09.379 Device Reliability: OK 00:22:09.379 Read Only: No 00:22:09.379 Volatile Memory Backup: OK 00:22:09.379 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:09.379 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:09.379 Available Spare: 0% 00:22:09.379 Available Spare Threshold: 0% 00:22:09.379 Life Percentage Used:[2024-11-20 09:54:32.623230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.379 
[2024-11-20 09:54:32.623235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x613690) 00:22:09.379 [2024-11-20 09:54:32.623242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.379 [2024-11-20 09:54:32.623254] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675b80, cid 7, qid 0 00:22:09.379 [2024-11-20 09:54:32.623417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.379 [2024-11-20 09:54:32.623423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.379 [2024-11-20 09:54:32.623425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.379 [2024-11-20 09:54:32.623429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675b80) on tqpair=0x613690 00:22:09.379 [2024-11-20 09:54:32.623457] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:09.379 [2024-11-20 09:54:32.623466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675100) on tqpair=0x613690 00:22:09.379 [2024-11-20 09:54:32.623472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.379 [2024-11-20 09:54:32.623476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675280) on tqpair=0x613690 00:22:09.379 [2024-11-20 09:54:32.623481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.379 [2024-11-20 09:54:32.623485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675400) on tqpair=0x613690 00:22:09.379 [2024-11-20 09:54:32.623489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.379 
[2024-11-20 09:54:32.623493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.379 [2024-11-20 09:54:32.623497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.379 [2024-11-20 09:54:32.623504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.379 [2024-11-20 09:54:32.623508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.379 [2024-11-20 09:54:32.623511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.379 [2024-11-20 09:54:32.623516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.379 [2024-11-20 09:54:32.623527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.379 [2024-11-20 09:54:32.623595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.379 [2024-11-20 09:54:32.623600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.379 [2024-11-20 09:54:32.623603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.623612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.623624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.623636] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.623712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.623717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.623720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.623731] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:09.380 [2024-11-20 09:54:32.623735] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:09.380 [2024-11-20 09:54:32.623743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.623755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.623765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.623826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.623832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.623835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.623846] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.623858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.623867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.623927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.623933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.623936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.623952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.623960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.623965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.623975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.624036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.624042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.624045] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.624057] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.624069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.624079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.624149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.624155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.624160] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.624172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.624184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.624193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 
09:54:32.624255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.624260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.624263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.624275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.624287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.624296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.624363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.624368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.624371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.624383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.624395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.624404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.624472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.624478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.624481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.624492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.624504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.624513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.624574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.624579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.624582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.624597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 
[2024-11-20 09:54:32.624604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.624609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.624618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.624681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.624687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.624689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.380 [2024-11-20 09:54:32.624701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.624713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.624722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.380 [2024-11-20 09:54:32.624790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.380 [2024-11-20 09:54:32.624796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.380 [2024-11-20 09:54:32.624799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 
00:22:09.380 [2024-11-20 09:54:32.624810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.380 [2024-11-20 09:54:32.624817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.380 [2024-11-20 09:54:32.624823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.380 [2024-11-20 09:54:32.624832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.381 [2024-11-20 09:54:32.629954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.381 [2024-11-20 09:54:32.629962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.381 [2024-11-20 09:54:32.629965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:09.381 [2024-11-20 09:54:32.629968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690 00:22:09.381 [2024-11-20 09:54:32.629979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:09.381 [2024-11-20 09:54:32.629982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:09.381 [2024-11-20 09:54:32.629985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x613690) 00:22:09.381 [2024-11-20 09:54:32.629991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.381 [2024-11-20 09:54:32.630002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x675580, cid 3, qid 0 00:22:09.381 [2024-11-20 09:54:32.630087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:09.381 [2024-11-20 09:54:32.630092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:09.381 
[2024-11-20 09:54:32.630095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:09.381 [2024-11-20 09:54:32.630099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x675580) on tqpair=0x613690
00:22:09.381 [2024-11-20 09:54:32.630107] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:22:09.381 0%
00:22:09.381 Data Units Read: 0
00:22:09.381 Data Units Written: 0
00:22:09.381 Host Read Commands: 0
00:22:09.381 Host Write Commands: 0
00:22:09.381 Controller Busy Time: 0 minutes
00:22:09.381 Power Cycles: 0
00:22:09.381 Power On Hours: 0 hours
00:22:09.381 Unsafe Shutdowns: 0
00:22:09.381 Unrecoverable Media Errors: 0
00:22:09.381 Lifetime Error Log Entries: 0
00:22:09.381 Warning Temperature Time: 0 minutes
00:22:09.381 Critical Temperature Time: 0 minutes
00:22:09.381 
00:22:09.381 Number of Queues
00:22:09.381 ================
00:22:09.381 Number of I/O Submission Queues: 127
00:22:09.381 Number of I/O Completion Queues: 127
00:22:09.381 
00:22:09.381 Active Namespaces
00:22:09.381 =================
00:22:09.381 Namespace ID:1
00:22:09.381 Error Recovery Timeout: Unlimited
00:22:09.381 Command Set Identifier: NVM (00h)
00:22:09.381 Deallocate: Supported
00:22:09.381 Deallocated/Unwritten Error: Not Supported
00:22:09.381 Deallocated Read Value: Unknown
00:22:09.381 Deallocate in Write Zeroes: Not Supported
00:22:09.381 Deallocated Guard Field: 0xFFFF
00:22:09.381 Flush: Supported
00:22:09.381 Reservation: Supported
00:22:09.381 Namespace Sharing Capabilities: Multiple Controllers
00:22:09.381 Size (in LBAs): 131072 (0GiB)
00:22:09.381 Capacity (in LBAs): 131072 (0GiB)
00:22:09.381 Utilization (in LBAs): 131072 (0GiB)
00:22:09.381 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:09.381 EUI64: ABCDEF0123456789
00:22:09.381 UUID: 1b21e466-439b-4960-8fc9-4ae307c300cf
00:22:09.381 Thin Provisioning: Not Supported
00:22:09.381 Per-NS Atomic Units: Yes
00:22:09.381 Atomic Boundary Size (Normal): 0 00:22:09.381 Atomic Boundary Size (PFail): 0 00:22:09.381 Atomic Boundary Offset: 0 00:22:09.381 Maximum Single Source Range Length: 65535 00:22:09.381 Maximum Copy Length: 65535 00:22:09.381 Maximum Source Range Count: 1 00:22:09.381 NGUID/EUI64 Never Reused: No 00:22:09.381 Namespace Write Protected: No 00:22:09.381 Number of LBA Formats: 1 00:22:09.381 Current LBA Format: LBA Format #00 00:22:09.381 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:09.381 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:09.381 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:09.381 rmmod nvme_tcp 00:22:09.381 rmmod nvme_fabrics 00:22:09.381 rmmod nvme_keyring 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2994165 ']' 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2994165 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2994165 ']' 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2994165 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2994165 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2994165' 00:22:09.640 killing process with pid 2994165 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2994165 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2994165 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- 
# iptr
00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:09.640 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:22:09.641 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:09.641 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:09.641 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:09.641 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:09.641 09:54:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:12.174 
00:22:12.174 real	0m9.414s
00:22:12.174 user	0m5.753s
00:22:12.174 sys	0m4.925s
00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:12.174 ************************************
00:22:12.174 END TEST nvmf_identify
00:22:12.174 ************************************
00:22:12.174 09:54:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:22:12.174 09:54:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:12.174 09:54:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:12.174 09:54:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:12.174 ************************************
00:22:12.174 START TEST nvmf_perf
************************************ 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:12.174 * Looking for test storage... 00:22:12.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # lcov --version 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- 
# : 1 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:22:12.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.174 --rc genhtml_branch_coverage=1 00:22:12.174 --rc genhtml_function_coverage=1 00:22:12.174 --rc genhtml_legend=1 00:22:12.174 --rc geninfo_all_blocks=1 00:22:12.174 --rc 
geninfo_unexecuted_blocks=1 00:22:12.174 00:22:12.174 ' 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:22:12.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.174 --rc genhtml_branch_coverage=1 00:22:12.174 --rc genhtml_function_coverage=1 00:22:12.174 --rc genhtml_legend=1 00:22:12.174 --rc geninfo_all_blocks=1 00:22:12.174 --rc geninfo_unexecuted_blocks=1 00:22:12.174 00:22:12.174 ' 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:22:12.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.174 --rc genhtml_branch_coverage=1 00:22:12.174 --rc genhtml_function_coverage=1 00:22:12.174 --rc genhtml_legend=1 00:22:12.174 --rc geninfo_all_blocks=1 00:22:12.174 --rc geninfo_unexecuted_blocks=1 00:22:12.174 00:22:12.174 ' 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:22:12.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.174 --rc genhtml_branch_coverage=1 00:22:12.174 --rc genhtml_function_coverage=1 00:22:12.174 --rc genhtml_legend=1 00:22:12.174 --rc geninfo_all_blocks=1 00:22:12.174 --rc geninfo_unexecuted_blocks=1 00:22:12.174 00:22:12.174 ' 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.174 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:12.175 09:54:35 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:12.175 09:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:18.746 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.746 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:18.746 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:18.746 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:18.746 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:18.746 09:54:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:18.746 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:18.746 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:18.746 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.747 
09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:18.747 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:18.747 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:18.747 Found net devices under 0000:86:00.0: cvl_0_0 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.747 09:54:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:18.747 Found net devices under 0000:86:00.1: cvl_0_1 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:18.747 09:54:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:18.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:22:18.747 00:22:18.747 --- 10.0.0.2 ping statistics --- 00:22:18.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.747 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:22:18.747 00:22:18.747 --- 10.0.0.1 ping statistics --- 00:22:18.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.747 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:18.747 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2997896 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2997896 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2997896 ']' 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.748 09:54:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:18.748 [2024-11-20 09:54:41.184526] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:22:18.748 [2024-11-20 09:54:41.184570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.748 [2024-11-20 09:54:41.265643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:18.748 [2024-11-20 09:54:41.308861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.748 [2024-11-20 09:54:41.308900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.748 [2024-11-20 09:54:41.308908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.748 [2024-11-20 09:54:41.308914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.748 [2024-11-20 09:54:41.308920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:18.748 [2024-11-20 09:54:41.310358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.748 [2024-11-20 09:54:41.310469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.748 [2024-11-20 09:54:41.310576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.748 [2024-11-20 09:54:41.310577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.748 09:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.748 09:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:18.748 09:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.748 09:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.748 09:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:18.748 09:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.748 09:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:18.748 09:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:22.038 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:22.038 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:22.038 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:22.038 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:22.297 09:54:45 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:22.297 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:22.298 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:22.298 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:22.298 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:22.557 [2024-11-20 09:54:45.712583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.557 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.816 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:22.816 09:54:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:23.074 09:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:23.074 09:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:23.074 09:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.333 [2024-11-20 09:54:46.535679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.333 09:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:23.592 09:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:23.592 09:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:23.592 09:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:23.592 09:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:24.970 Initializing NVMe Controllers 00:22:24.970 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:24.970 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:24.970 Initialization complete. Launching workers. 00:22:24.970 ======================================================== 00:22:24.970 Latency(us) 00:22:24.970 Device Information : IOPS MiB/s Average min max 00:22:24.970 PCIE (0000:5e:00.0) NSID 1 from core 0: 96981.30 378.83 329.46 29.58 4386.85 00:22:24.970 ======================================================== 00:22:24.970 Total : 96981.30 378.83 329.46 29.58 4386.85 00:22:24.970 00:22:24.970 09:54:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.349 Initializing NVMe Controllers 00:22:26.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:26.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:26.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:26.349 Initialization complete. Launching workers. 
00:22:26.349 ======================================================== 00:22:26.349 Latency(us) 00:22:26.349 Device Information : IOPS MiB/s Average min max 00:22:26.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.70 0.33 12078.47 106.66 45697.12 00:22:26.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.85 0.16 24064.54 5988.95 47886.43 00:22:26.349 ======================================================== 00:22:26.349 Total : 126.55 0.49 16042.37 106.66 47886.43 00:22:26.349 00:22:26.349 09:54:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:27.730 Initializing NVMe Controllers 00:22:27.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:27.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:27.730 Initialization complete. Launching workers. 
00:22:27.730 ======================================================== 00:22:27.730 Latency(us) 00:22:27.730 Device Information : IOPS MiB/s Average min max 00:22:27.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10937.56 42.72 2932.14 400.27 10175.69 00:22:27.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3766.85 14.71 8530.05 6885.14 18841.43 00:22:27.730 ======================================================== 00:22:27.730 Total : 14704.41 57.44 4366.16 400.27 18841.43 00:22:27.730 00:22:27.730 09:54:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:27.730 09:54:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:27.730 09:54:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:30.268 Initializing NVMe Controllers 00:22:30.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.268 Controller IO queue size 128, less than required. 00:22:30.268 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.268 Controller IO queue size 128, less than required. 00:22:30.268 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:30.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:30.268 Initialization complete. Launching workers. 
00:22:30.268 ======================================================== 00:22:30.268 Latency(us) 00:22:30.268 Device Information : IOPS MiB/s Average min max 00:22:30.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1823.91 455.98 71328.99 52464.51 113314.62 00:22:30.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 548.17 137.04 236298.93 80234.39 360834.39 00:22:30.268 ======================================================== 00:22:30.268 Total : 2372.08 593.02 109452.44 52464.51 360834.39 00:22:30.268 00:22:30.268 09:54:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:30.527 No valid NVMe controllers or AIO or URING devices found 00:22:30.527 Initializing NVMe Controllers 00:22:30.527 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.527 Controller IO queue size 128, less than required. 00:22:30.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.527 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:30.527 Controller IO queue size 128, less than required. 00:22:30.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:30.527 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:30.527 WARNING: Some requested NVMe devices were skipped 00:22:30.527 09:54:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:33.062 Initializing NVMe Controllers 00:22:33.062 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.062 Controller IO queue size 128, less than required. 00:22:33.062 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.062 Controller IO queue size 128, less than required. 00:22:33.062 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:33.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:33.062 Initialization complete. Launching workers. 
00:22:33.062 00:22:33.062 ==================== 00:22:33.062 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:33.062 TCP transport: 00:22:33.062 polls: 11496 00:22:33.062 idle_polls: 8161 00:22:33.062 sock_completions: 3335 00:22:33.062 nvme_completions: 5967 00:22:33.062 submitted_requests: 8912 00:22:33.062 queued_requests: 1 00:22:33.062 00:22:33.062 ==================== 00:22:33.062 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:33.062 TCP transport: 00:22:33.062 polls: 11316 00:22:33.062 idle_polls: 7554 00:22:33.062 sock_completions: 3762 00:22:33.062 nvme_completions: 6805 00:22:33.062 submitted_requests: 10336 00:22:33.062 queued_requests: 1 00:22:33.062 ======================================================== 00:22:33.062 Latency(us) 00:22:33.062 Device Information : IOPS MiB/s Average min max 00:22:33.062 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1490.52 372.63 88217.68 64209.91 170974.06 00:22:33.062 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1699.88 424.97 75456.58 42815.47 106719.57 00:22:33.062 ======================================================== 00:22:33.062 Total : 3190.40 797.60 81418.42 42815.47 170974.06 00:22:33.062 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.062 rmmod nvme_tcp 00:22:33.062 rmmod nvme_fabrics 00:22:33.062 rmmod nvme_keyring 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2997896 ']' 00:22:33.062 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2997896 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2997896 ']' 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2997896 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2997896 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2997896' 00:22:33.320 killing process with pid 2997896 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2997896 00:22:33.320 09:54:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2997896 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.697 09:54:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.233 09:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.233 00:22:37.233 real 0m24.870s 00:22:37.233 user 1m6.166s 00:22:37.233 sys 0m8.235s 00:22:37.233 09:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.233 09:54:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:37.233 ************************************ 00:22:37.233 END TEST nvmf_perf 00:22:37.233 ************************************ 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.233 ************************************ 00:22:37.233 START TEST nvmf_fio_host 00:22:37.233 ************************************ 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:37.233 * Looking for test storage... 00:22:37.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # lcov --version 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.233 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.234 09:55:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.234 09:55:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:22:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.234 --rc genhtml_branch_coverage=1 00:22:37.234 --rc genhtml_function_coverage=1 00:22:37.234 --rc genhtml_legend=1 00:22:37.234 --rc geninfo_all_blocks=1 00:22:37.234 --rc geninfo_unexecuted_blocks=1 00:22:37.234 00:22:37.234 ' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:22:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.234 --rc genhtml_branch_coverage=1 00:22:37.234 --rc genhtml_function_coverage=1 00:22:37.234 --rc genhtml_legend=1 00:22:37.234 --rc geninfo_all_blocks=1 00:22:37.234 --rc geninfo_unexecuted_blocks=1 00:22:37.234 00:22:37.234 ' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:22:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.234 --rc genhtml_branch_coverage=1 00:22:37.234 --rc genhtml_function_coverage=1 00:22:37.234 --rc genhtml_legend=1 00:22:37.234 --rc geninfo_all_blocks=1 00:22:37.234 --rc geninfo_unexecuted_blocks=1 00:22:37.234 00:22:37.234 ' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:22:37.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.234 --rc genhtml_branch_coverage=1 00:22:37.234 --rc genhtml_function_coverage=1 00:22:37.234 --rc genhtml_legend=1 00:22:37.234 --rc geninfo_all_blocks=1 00:22:37.234 --rc geninfo_unexecuted_blocks=1 00:22:37.234 00:22:37.234 ' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.234 09:55:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.234 09:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.804 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.804 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.804 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.804 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.804 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.804 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.804 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:43.805 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:43.805 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.805 09:55:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:43.805 Found net devices under 0000:86:00.0: cvl_0_0 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:43.805 Found net devices under 0000:86:00.1: cvl_0_1 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.805 09:55:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.805 09:55:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:22:43.805 00:22:43.805 --- 10.0.0.2 ping statistics --- 00:22:43.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.805 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:22:43.805 00:22:43.805 --- 10.0.0.1 ping statistics --- 00:22:43.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.805 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.805 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3004027 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3004027 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3004027 ']' 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.806 [2024-11-20 09:55:06.248554] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:22:43.806 [2024-11-20 09:55:06.248597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.806 [2024-11-20 09:55:06.330978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:43.806 [2024-11-20 09:55:06.371563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.806 [2024-11-20 09:55:06.371607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:43.806 [2024-11-20 09:55:06.371614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.806 [2024-11-20 09:55:06.371620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.806 [2024-11-20 09:55:06.371625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.806 [2024-11-20 09:55:06.373194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.806 [2024-11-20 09:55:06.373306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.806 [2024-11-20 09:55:06.373412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.806 [2024-11-20 09:55:06.373413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:43.806 [2024-11-20 09:55:06.651396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:43.806 Malloc1 00:22:43.806 09:55:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.806 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:44.065 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.324 [2024-11-20 09:55:07.499433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.324 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:44.582 09:55:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:44.582 09:55:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:44.841 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:44.841 fio-3.35 00:22:44.841 Starting 1 thread 00:22:47.377 00:22:47.377 test: (groupid=0, jobs=1): err= 0: pid=3004599: Wed Nov 20 09:55:10 2024 00:22:47.377 read: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(90.9MiB/2005msec) 00:22:47.377 slat (nsec): min=1585, max=241700, avg=1747.14, stdev=2259.46 00:22:47.377 clat (usec): min=3151, max=10739, avg=6096.74, stdev=484.05 00:22:47.377 lat (usec): min=3185, max=10741, avg=6098.49, stdev=483.99 00:22:47.377 clat percentiles (usec): 00:22:47.377 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5735], 00:22:47.377 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:22:47.377 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6849], 00:22:47.377 | 99.00th=[ 7177], 99.50th=[ 7504], 99.90th=[ 9241], 99.95th=[ 9765], 00:22:47.377 | 99.99th=[10683] 00:22:47.377 bw ( KiB/s): min=45712, max=46720, per=99.91%, avg=46408.00, stdev=468.17, samples=4 00:22:47.377 iops : min=11428, max=11680, avg=11602.00, stdev=117.04, samples=4 00:22:47.377 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.3MiB/2005msec); 0 zone resets 00:22:47.377 slat (nsec): min=1620, max=236801, avg=1799.36, stdev=1716.12 00:22:47.377 clat (usec): min=2461, max=9305, avg=4938.12, stdev=393.98 00:22:47.377 lat (usec): min=2477, max=9306, avg=4939.92, stdev=393.99 00:22:47.377 clat percentiles (usec): 00:22:47.377 | 1.00th=[ 4015], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:47.377 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 
00:22:47.377 | 70.00th=[ 5145], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5538], 00:22:47.377 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 7242], 99.95th=[ 8455], 00:22:47.377 | 99.99th=[ 9241] 00:22:47.378 bw ( KiB/s): min=45656, max=46504, per=100.00%, avg=46130.00, stdev=351.72, samples=4 00:22:47.378 iops : min=11414, max=11626, avg=11532.50, stdev=87.93, samples=4 00:22:47.378 lat (msec) : 4=0.52%, 10=99.47%, 20=0.01% 00:22:47.378 cpu : usr=73.35%, sys=25.60%, ctx=100, majf=0, minf=3 00:22:47.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:47.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:47.378 issued rwts: total=23282,23114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:47.378 00:22:47.378 Run status group 0 (all jobs): 00:22:47.378 READ: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=90.9MiB (95.4MB), run=2005-2005msec 00:22:47.378 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.3MiB (94.7MB), run=2005-2005msec 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:47.378 09:55:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:47.637 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:47.637 fio-3.35 00:22:47.637 Starting 1 thread 00:22:50.173 00:22:50.173 test: (groupid=0, jobs=1): err= 0: pid=3005170: Wed Nov 20 09:55:13 2024 00:22:50.173 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(341MiB/2008msec) 00:22:50.173 slat (nsec): min=2598, max=86663, avg=2833.08, stdev=1175.64 00:22:50.173 clat (usec): min=1344, max=13318, avg=6664.72, stdev=1442.08 00:22:50.173 lat (usec): min=1346, max=13321, avg=6667.56, stdev=1442.15 00:22:50.173 clat percentiles (usec): 00:22:50.173 | 1.00th=[ 3720], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5407], 00:22:50.173 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:22:50.173 | 70.00th=[ 7504], 80.00th=[ 7832], 90.00th=[ 8455], 95.00th=[ 8979], 00:22:50.173 | 99.00th=[10159], 99.50th=[10683], 99.90th=[11469], 99.95th=[11600], 00:22:50.173 | 99.99th=[11863] 00:22:50.173 bw ( KiB/s): min=83968, max=95872, per=50.97%, avg=88752.00, stdev=5065.34, samples=4 00:22:50.173 iops : min= 5248, max= 5992, avg=5547.00, stdev=316.58, samples=4 00:22:50.173 write: IOPS=6407, BW=100MiB/s (105MB/s)(181MiB/1806msec); 0 zone resets 00:22:50.173 slat (usec): min=29, max=324, avg=31.53, stdev= 5.64 00:22:50.173 clat (usec): min=3852, max=15738, avg=8796.31, stdev=1535.58 00:22:50.173 lat (usec): min=3883, max=15769, avg=8827.84, stdev=1536.06 00:22:50.173 clat percentiles (usec): 00:22:50.173 | 1.00th=[ 5669], 5.00th=[ 6587], 10.00th=[ 7046], 
20.00th=[ 7504], 00:22:50.173 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979], 00:22:50.173 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[10945], 95.00th=[11600], 00:22:50.173 | 99.00th=[12649], 99.50th=[13304], 99.90th=[15270], 99.95th=[15533], 00:22:50.173 | 99.99th=[15664] 00:22:50.173 bw ( KiB/s): min=87904, max=99712, per=90.20%, avg=92472.00, stdev=5061.49, samples=4 00:22:50.173 iops : min= 5494, max= 6232, avg=5779.50, stdev=316.34, samples=4 00:22:50.173 lat (msec) : 2=0.01%, 4=1.50%, 10=90.40%, 20=8.09% 00:22:50.173 cpu : usr=85.80%, sys=13.55%, ctx=43, majf=0, minf=3 00:22:50.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:50.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:50.173 issued rwts: total=21853,11572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:50.173 00:22:50.173 Run status group 0 (all jobs): 00:22:50.173 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=341MiB (358MB), run=2008-2008msec 00:22:50.173 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=181MiB (190MB), run=1806-1806msec 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.173 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.173 rmmod nvme_tcp 00:22:50.173 rmmod nvme_fabrics 00:22:50.173 rmmod nvme_keyring 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3004027 ']' 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3004027 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3004027 ']' 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3004027 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3004027 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3004027' 
00:22:50.433 killing process with pid 3004027 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3004027 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3004027 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:50.433 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:50.692 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.692 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.692 09:55:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.599 09:55:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.599 00:22:52.599 real 0m15.786s 00:22:52.599 user 0m46.735s 00:22:52.599 sys 0m6.507s 00:22:52.599 09:55:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.599 09:55:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 ************************************ 
00:22:52.599 END TEST nvmf_fio_host 00:22:52.599 ************************************ 00:22:52.599 09:55:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:52.599 09:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:52.599 09:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.599 09:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 ************************************ 00:22:52.599 START TEST nvmf_failover 00:22:52.599 ************************************ 00:22:52.599 09:55:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:52.859 * Looking for test storage... 00:22:52.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:52.859 09:55:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:22:52.859 09:55:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # lcov --version 00:22:52.859 09:55:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.859 09:55:16 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:22:52.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.859 --rc genhtml_branch_coverage=1 00:22:52.859 --rc genhtml_function_coverage=1 00:22:52.859 --rc genhtml_legend=1 00:22:52.859 --rc geninfo_all_blocks=1 00:22:52.859 --rc geninfo_unexecuted_blocks=1 00:22:52.859 00:22:52.859 ' 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:22:52.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.859 --rc genhtml_branch_coverage=1 00:22:52.859 --rc genhtml_function_coverage=1 00:22:52.859 --rc genhtml_legend=1 00:22:52.859 --rc geninfo_all_blocks=1 00:22:52.859 --rc geninfo_unexecuted_blocks=1 00:22:52.859 00:22:52.859 ' 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:22:52.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.859 --rc genhtml_branch_coverage=1 00:22:52.859 --rc genhtml_function_coverage=1 00:22:52.859 --rc genhtml_legend=1 00:22:52.859 --rc geninfo_all_blocks=1 00:22:52.859 --rc geninfo_unexecuted_blocks=1 00:22:52.859 00:22:52.859 ' 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:22:52.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.859 --rc genhtml_branch_coverage=1 00:22:52.859 --rc genhtml_function_coverage=1 00:22:52.859 --rc genhtml_legend=1 00:22:52.859 --rc 
geninfo_all_blocks=1 00:22:52.859 --rc geninfo_unexecuted_blocks=1 00:22:52.859 00:22:52.859 ' 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.859 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.860 09:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.505 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.505 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.506 09:55:21 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:59.506 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:59.506 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:59.506 Found net devices under 0000:86:00.0: cvl_0_0 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:59.506 Found net devices under 0000:86:00.1: cvl_0_1 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.506 09:55:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:22:59.506 00:22:59.506 --- 10.0.0.2 ping statistics --- 00:22:59.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.506 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:59.506 00:22:59.506 --- 10.0.0.1 ping statistics --- 00:22:59.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.506 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.506 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3009001 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3009001 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3009001 ']' 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.507 [2024-11-20 09:55:22.101318] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:22:59.507 [2024-11-20 09:55:22.101360] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.507 [2024-11-20 09:55:22.182341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:59.507 [2024-11-20 09:55:22.225126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.507 [2024-11-20 09:55:22.225166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.507 [2024-11-20 09:55:22.225173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.507 [2024-11-20 09:55:22.225179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:59.507 [2024-11-20 09:55:22.225185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.507 [2024-11-20 09:55:22.229969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.507 [2024-11-20 09:55:22.230057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.507 [2024-11-20 09:55:22.230057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:59.507 [2024-11-20 09:55:22.538111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:59.507 Malloc0 00:22:59.507 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:59.766 09:55:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:00.025 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.025 [2024-11-20 09:55:23.346625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.284 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:00.284 [2024-11-20 09:55:23.555161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:00.284 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:00.544 [2024-11-20 09:55:23.763844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3009409 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3009409 /var/tmp/bdevperf.sock 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3009409 ']' 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.544 09:55:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.804 09:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.804 09:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:00.804 09:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:01.063 NVMe0n1 00:23:01.322 09:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:01.581 00:23:01.582 09:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3009447 00:23:01.582 09:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.582 09:55:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
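The `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected` message earlier in this log comes from evaluating `'[' '' -eq 1 ']'`: an unset variable expands to an empty string, which `test`'s `-eq` rejects as a non-integer. A minimal reproduction and a guarded variant (function names and the default-to-0 pattern are illustrative, not SPDK's actual fix):

```shell
# Unguarded form: with an empty argument this reproduces
# "[: : integer expression expected" (stderr suppressed here)
# and the test simply evaluates false.
check_unguarded() {
    [ "$1" -eq 1 ] 2>/dev/null && echo yes || echo no
}

# Guarded form: default an empty/unset value to 0 so the
# operand handed to -eq is always a valid integer.
check_guarded() {
    [ "${1:-0}" -eq 1 ] && echo yes || echo no
}

check_unguarded ""   # no (numeric test errored, error suppressed)
check_guarded ""     # no (empty defaulted to 0, no error)
check_guarded 1      # yes
```

The script still proceeds after the error because `test` returns a non-zero status rather than aborting, which is why the log continues normally past line 33.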
00:23:02.520 09:55:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-20 09:55:25.974431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6f2d0 is same with the state(6) to be set 00:23:02.781 09:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:06.071 09:55:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:06.331 00:23:06.331 09:55:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:06.331 [2024-11-20 09:55:29.615913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f70060 is same with the state(6) to be set 00:23:06.332 09:55:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:09.623 09:55:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.623 [2024-11-20 09:55:32.829892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.623 09:55:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:10.562
09:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:10.821 [2024-11-20 09:55:34.050590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f70e30 is same with the state(6) to be set 00:23:10.822 09:55:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3009447 00:23:17.402 { 00:23:17.402 "results": [ 00:23:17.402 { 00:23:17.402 "job": "NVMe0n1", 00:23:17.402 "core_mask": "0x1", 00:23:17.402 "workload": "verify", 00:23:17.402 "status": "finished", 00:23:17.402 "verify_range": { 00:23:17.402 "start": 0, 00:23:17.402 "length": 16384
}, 00:23:17.402 "queue_depth": 128, 00:23:17.402 "io_size": 4096, 00:23:17.402 "runtime": 15.006254, 00:23:17.402 "iops": 10760.913416499547, 00:23:17.402 "mibps": 42.034818033201354, 00:23:17.402 "io_failed": 9637, 00:23:17.402 "io_timeout": 0, 00:23:17.402 "avg_latency_us": 11202.867930017272, 00:23:17.402 "min_latency_us": 422.06608695652176, 00:23:17.402 "max_latency_us": 237069.35652173913 00:23:17.402 } 00:23:17.402 ], 00:23:17.402 "core_count": 1 00:23:17.402 } 00:23:17.402 09:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3009409 00:23:17.402 09:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3009409 ']' 00:23:17.402 09:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3009409 00:23:17.402 09:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:17.402 09:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.402 09:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3009409 00:23:17.402 09:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.402 09:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.402 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3009409' 00:23:17.402 killing process with pid 3009409 00:23:17.402 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3009409 00:23:17.402 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3009409 00:23:17.402 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:17.402 [2024-11-20 09:55:23.835113] Starting SPDK v25.01-pre git sha1 
27a4d33d8 / DPDK 24.03.0 initialization... 00:23:17.402 [2024-11-20 09:55:23.835168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3009409 ] 00:23:17.402 [2024-11-20 09:55:23.911687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.402 [2024-11-20 09:55:23.953510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.402 Running I/O for 15 seconds... 10977.00 IOPS, 42.88 MiB/s [2024-11-20T08:55:40.734Z] [2024-11-20 09:55:25.976649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.402 [2024-11-20 09:55:25.976688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.403 [2024-11-20 09:55:25.977292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.403 [2024-11-20 09:55:25.977308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.403 [2024-11-20 09:55:25.977322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.403 [2024-11-20 09:55:25.977337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:17.403 [2024-11-20 09:55:25.977368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.403 [2024-11-20 09:55:25.977500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.403 [2024-11-20 09:55:25.977507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 
[2024-11-20 09:55:25.977626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.977990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.977997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.978005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.978012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.978020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.978029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.978037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.978044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.978052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.978058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 
09:55:25.978067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.978074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.978082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.978089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.978097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.404 [2024-11-20 09:55:25.978103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.404 [2024-11-20 09:55:25.978111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.405 [2024-11-20 09:55:25.978483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.405 [2024-11-20 09:55:25.978502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.405 [2024-11-20 09:55:25.978510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97672 len:8 PRP1 0x0 PRP2 0x0
00:23:17.405 [2024-11-20 09:55:25.978517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:17.405 [2024-11-20 09:55:25.978527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:17.405 [2024-11-20 09:55:25.978532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... same four-line abort/complete cycle repeated for each queued WRITE, lba:97680 through lba:97752 ...]
00:23:17.406 [2024-11-20 09:55:25.978807] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:17.406 [2024-11-20 09:55:25.978831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:17.406 [2024-11-20 09:55:26.202790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST abort pair repeated for cid:1 through cid:3 ...]
00:23:17.406 [2024-11-20 09:55:26.203072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:17.406 [2024-11-20 09:55:26.203178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cd340 (9): Bad file descriptor
00:23:17.406 [2024-11-20 09:55:26.212558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:17.406 [2024-11-20 09:55:26.251116] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:17.406 9548.00 IOPS, 37.30 MiB/s [2024-11-20T08:55:40.738Z] 10064.33 IOPS, 39.31 MiB/s [2024-11-20T08:55:40.738Z] 10339.25 IOPS, 40.39 MiB/s [2024-11-20T08:55:40.738Z]
[2024-11-20 09:55:29.616697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:17.406 [2024-11-20 09:55:29.616732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same two-line abort pair repeated for each queued READ, lba:12968 through lba:13336 (and lba:13344) ...]
00:23:17.407 [2024-11-20 09:55:29.617481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:17.407 [2024-11-20 09:55:29.617488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same two-line abort pair repeated for each queued WRITE, lba:13360 through lba:13664 ...]
00:23:17.408 [2024-11-20
09:55:29.618304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.408 [2024-11-20 09:55:29.618311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.408 [2024-11-20 09:55:29.618319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.408 [2024-11-20 09:55:29.618326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.408 [2024-11-20 09:55:29.618335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.408 [2024-11-20 09:55:29.618341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.408 [2024-11-20 09:55:29.618349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.408 [2024-11-20 09:55:29.618356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.408 [2024-11-20 09:55:29.618364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.408 [2024-11-20 09:55:29.618370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.408 [2024-11-20 09:55:29.618379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.408 [2024-11-20 09:55:29.618385] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.408 [2024-11-20 09:55:29.618393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.409 [2024-11-20 09:55:29.618641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13864 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13872 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13880 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13896 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13904 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13912 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13928 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13936 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13944 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.618976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.618982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.618987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:8 PRP1 0x0 PRP2 0x0 00:23:17.409 [2024-11-20 09:55:29.618994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.409 [2024-11-20 09:55:29.619001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.409 [2024-11-20 09:55:29.619006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.409 [2024-11-20 09:55:29.619011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13960 len:8 PRP1 0x0 PRP2 0x0 00:23:17.410 [2024-11-20 09:55:29.619017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:29.629684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.410 [2024-11-20 09:55:29.629699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.410 [2024-11-20 09:55:29.629708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13968 len:8 PRP1 0x0 PRP2 0x0 00:23:17.410 [2024-11-20 09:55:29.629717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:29.629727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.410 [2024-11-20 09:55:29.629736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.410 [2024-11-20 09:55:29.629744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13976 len:8 PRP1 0x0 PRP2 0x0 00:23:17.410 [2024-11-20 09:55:29.629753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:29.629801] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:17.410 [2024-11-20 09:55:29.629831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.410 [2024-11-20 09:55:29.629841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:29.629852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.410 
[2024-11-20 09:55:29.629862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:29.629872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.410 [2024-11-20 09:55:29.629882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:29.629892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.410 [2024-11-20 09:55:29.629901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:29.629911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:17.410 [2024-11-20 09:55:29.629956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cd340 (9): Bad file descriptor 00:23:17.410 [2024-11-20 09:55:29.633834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:17.410 [2024-11-20 09:55:29.783458] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:23:17.410 10106.00 IOPS, 39.48 MiB/s [2024-11-20T08:55:40.742Z] 10274.67 IOPS, 40.14 MiB/s [2024-11-20T08:55:40.742Z] 10415.86 IOPS, 40.69 MiB/s [2024-11-20T08:55:40.742Z] 10519.12 IOPS, 41.09 MiB/s [2024-11-20T08:55:40.742Z] 10599.11 IOPS, 41.40 MiB/s [2024-11-20T08:55:40.742Z] [2024-11-20 09:55:34.052185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 
09:55:34.052387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.410 [2024-11-20 09:55:34.052466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.410 [2024-11-20 09:55:34.052474] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:17.410 [2024-11-20 09:55:34.052483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same command/completion pair repeated for READ commands lba:56744 through lba:56864 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each aborted with ABORTED - SQ DELETION (00/08) qid:1 ...]
00:23:17.411 [2024-11-20 09:55:34.052732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:17.411 [2024-11-20 09:55:34.052740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pair repeated for WRITE commands lba:56888 through lba:57512 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and one interleaved READ lba:56872, each aborted with ABORTED - SQ DELETION (00/08) qid:1 ...]
00:23:17.413 [2024-11-20 09:55:34.054022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:17.413 [2024-11-20 09:55:34.054029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57520 len:8 PRP1 0x0 PRP2 0x0
00:23:17.413 [2024-11-20 09:55:34.054038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same "aborting queued i/o" / "Command completed manually" sequence repeated for queued WRITE commands lba:57528 through lba:57608 (len:8, PRP1 0x0 PRP2 0x0) ...]
00:23:17.414 [2024-11-20 09:55:34.054324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:17.414 [2024-11-20 09:55:34.054329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:17.414 [2024-11-20 09:55:34.054334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57616 len:8 PRP1 0x0 PRP2 0x0
00:23:17.414 [2024-11-20 09:55:34.054340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:17.414 [2024-11-20 09:55:34.054346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-20 09:55:34.054352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.414 [2024-11-20 09:55:34.054358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57624 len:8 PRP1 0x0 PRP2 0x0 00:23:17.414 [2024-11-20 09:55:34.054364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.414 [2024-11-20 09:55:34.054406] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:17.414 [2024-11-20 09:55:34.054429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.414 [2024-11-20 09:55:34.054437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.414 [2024-11-20 09:55:34.054444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.414 [2024-11-20 09:55:34.054453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.414 [2024-11-20 09:55:34.054460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.414 [2024-11-20 09:55:34.054467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.414 [2024-11-20 09:55:34.054474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.414 [2024-11-20 09:55:34.054480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:17.414 [2024-11-20 09:55:34.054490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:17.414 [2024-11-20 09:55:34.064890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10cd340 (9): Bad file descriptor 00:23:17.414 [2024-11-20 09:55:34.068788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:17.414 [2024-11-20 09:55:34.095297] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:23:17.414 10598.90 IOPS, 41.40 MiB/s [2024-11-20T08:55:40.746Z] 10647.27 IOPS, 41.59 MiB/s [2024-11-20T08:55:40.746Z] 10684.75 IOPS, 41.74 MiB/s [2024-11-20T08:55:40.746Z] 10714.54 IOPS, 41.85 MiB/s [2024-11-20T08:55:40.746Z] 10742.07 IOPS, 41.96 MiB/s 00:23:17.414 Latency(us) 00:23:17.414 [2024-11-20T08:55:40.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.414 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:17.414 Verification LBA range: start 0x0 length 0x4000 00:23:17.414 NVMe0n1 : 15.01 10760.91 42.03 642.20 0.00 11202.87 422.07 237069.36 00:23:17.414 [2024-11-20T08:55:40.746Z] =================================================================================================================== 00:23:17.414 [2024-11-20T08:55:40.746Z] Total : 10760.91 42.03 642.20 0.00 11202.87 422.07 237069.36 00:23:17.414 Received shutdown signal, test time was about 15.000000 seconds 00:23:17.414 00:23:17.414 Latency(us) 00:23:17.414 [2024-11-20T08:55:40.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.414 [2024-11-20T08:55:40.746Z] =================================================================================================================== 00:23:17.414 [2024-11-20T08:55:40.746Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.414 09:55:40 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3011949 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3011949 /var/tmp/bdevperf.sock 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3011949 ']' 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:17.414 [2024-11-20 09:55:40.609846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:17.414 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:17.674 [2024-11-20 09:55:40.818433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:17.674 09:55:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:17.933 NVMe0n1 00:23:17.933 09:55:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:18.501 00:23:18.501 09:55:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:18.760 00:23:18.760 09:55:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.760 09:55:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:19.020 09:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.020 09:55:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:22.309 09:55:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:22.309 09:55:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:22.309 09:55:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:22.309 09:55:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3012867 00:23:22.309 09:55:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3012867 00:23:23.688 { 00:23:23.688 "results": [ 00:23:23.688 { 00:23:23.688 "job": "NVMe0n1", 00:23:23.688 "core_mask": "0x1", 00:23:23.688 "workload": "verify", 00:23:23.688 "status": "finished", 00:23:23.688 "verify_range": { 00:23:23.688 "start": 0, 00:23:23.688 "length": 16384 00:23:23.688 }, 00:23:23.688 "queue_depth": 128, 00:23:23.688 "io_size": 4096, 00:23:23.688 "runtime": 1.007425, 00:23:23.688 "iops": 10898.081743057795, 00:23:23.688 "mibps": 42.57063180881951, 00:23:23.688 "io_failed": 0, 00:23:23.688 "io_timeout": 0, 00:23:23.688 "avg_latency_us": 
11700.203067199436, 00:23:23.688 "min_latency_us": 2379.241739130435, 00:23:23.688 "max_latency_us": 17780.201739130436 00:23:23.688 } 00:23:23.688 ], 00:23:23.688 "core_count": 1 00:23:23.688 } 00:23:23.688 09:55:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:23.688 [2024-11-20 09:55:40.213820] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:23:23.688 [2024-11-20 09:55:40.213873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011949 ] 00:23:23.688 [2024-11-20 09:55:40.291652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.688 [2024-11-20 09:55:40.329915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.688 [2024-11-20 09:55:42.312967] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:23.688 [2024-11-20 09:55:42.313012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.688 [2024-11-20 09:55:42.313023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.688 [2024-11-20 09:55:42.313032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.688 [2024-11-20 09:55:42.313039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.688 [2024-11-20 09:55:42.313047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:23.688 [2024-11-20 09:55:42.313053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.688 [2024-11-20 09:55:42.313061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.688 [2024-11-20 09:55:42.313067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.688 [2024-11-20 09:55:42.313074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:23.688 [2024-11-20 09:55:42.313099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:23.688 [2024-11-20 09:55:42.313112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4a340 (9): Bad file descriptor 00:23:23.688 [2024-11-20 09:55:42.323863] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:23.688 Running I/O for 1 seconds... 
00:23:23.688 10851.00 IOPS, 42.39 MiB/s 00:23:23.688 Latency(us) 00:23:23.688 [2024-11-20T08:55:47.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.688 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:23.688 Verification LBA range: start 0x0 length 0x4000 00:23:23.688 NVMe0n1 : 1.01 10898.08 42.57 0.00 0.00 11700.20 2379.24 17780.20 00:23:23.688 [2024-11-20T08:55:47.020Z] =================================================================================================================== 00:23:23.688 [2024-11-20T08:55:47.020Z] Total : 10898.08 42.57 0.00 0.00 11700.20 2379.24 17780.20 00:23:23.688 09:55:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:23.688 09:55:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:23.688 09:55:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:23.947 09:55:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:23.947 09:55:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:23.947 09:55:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.207 09:55:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3011949 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3011949 ']' 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3011949 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3011949 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3011949' 00:23:27.498 killing process with pid 3011949 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3011949 00:23:27.498 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3011949 00:23:27.757 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:27.757 09:55:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.017 rmmod nvme_tcp 00:23:28.017 rmmod nvme_fabrics 00:23:28.017 rmmod nvme_keyring 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3009001 ']' 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3009001 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3009001 ']' 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3009001 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3009001 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3009001' 00:23:28.017 killing process with pid 3009001 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3009001 00:23:28.017 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3009001 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.277 09:55:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.187 09:55:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.187 00:23:30.187 real 0m37.578s 00:23:30.187 user 1m58.913s 00:23:30.187 sys 
0m7.926s 00:23:30.187 09:55:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.187 09:55:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:30.187 ************************************ 00:23:30.187 END TEST nvmf_failover 00:23:30.187 ************************************ 00:23:30.187 09:55:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:30.187 09:55:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.187 09:55:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.187 09:55:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.447 ************************************ 00:23:30.447 START TEST nvmf_host_discovery 00:23:30.447 ************************************ 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:30.447 * Looking for test storage... 
00:23:30.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # lcov --version 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:23:30.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.447 --rc genhtml_branch_coverage=1 00:23:30.447 --rc genhtml_function_coverage=1 00:23:30.447 --rc 
genhtml_legend=1 00:23:30.447 --rc geninfo_all_blocks=1 00:23:30.447 --rc geninfo_unexecuted_blocks=1 00:23:30.447 00:23:30.447 ' 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:23:30.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.447 --rc genhtml_branch_coverage=1 00:23:30.447 --rc genhtml_function_coverage=1 00:23:30.447 --rc genhtml_legend=1 00:23:30.447 --rc geninfo_all_blocks=1 00:23:30.447 --rc geninfo_unexecuted_blocks=1 00:23:30.447 00:23:30.447 ' 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:23:30.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.447 --rc genhtml_branch_coverage=1 00:23:30.447 --rc genhtml_function_coverage=1 00:23:30.447 --rc genhtml_legend=1 00:23:30.447 --rc geninfo_all_blocks=1 00:23:30.447 --rc geninfo_unexecuted_blocks=1 00:23:30.447 00:23:30.447 ' 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:23:30.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.447 --rc genhtml_branch_coverage=1 00:23:30.447 --rc genhtml_function_coverage=1 00:23:30.447 --rc genhtml_legend=1 00:23:30.447 --rc geninfo_all_blocks=1 00:23:30.447 --rc geninfo_unexecuted_blocks=1 00:23:30.447 00:23:30.447 ' 00:23:30.447 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.448 09:55:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.448 09:55:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.448 09:55:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.448 09:55:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.025 
09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.025 09:55:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.025 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:37.026 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:37.026 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:37.026 Found net devices under 0000:86:00.0: cvl_0_0 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:37.026 Found net devices under 0000:86:00.1: cvl_0_1 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:23:37.026 00:23:37.026 --- 10.0.0.2 ping statistics --- 00:23:37.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.026 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:23:37.026 00:23:37.026 --- 10.0.0.1 ping statistics --- 00:23:37.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.026 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.026 
09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3017313 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3017313 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3017313 ']' 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.026 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.026 [2024-11-20 09:55:59.748059] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:23:37.026 [2024-11-20 09:55:59.748111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.026 [2024-11-20 09:55:59.826104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.026 [2024-11-20 09:55:59.868005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.026 [2024-11-20 09:55:59.868039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.026 [2024-11-20 09:55:59.868047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.027 [2024-11-20 09:55:59.868053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.027 [2024-11-20 09:55:59.868058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.027 [2024-11-20 09:55:59.868625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.027 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.027 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:37.027 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.027 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.027 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.027 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.027 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.027 09:55:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 [2024-11-20 09:56:00.003982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 [2024-11-20 09:56:00.016171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:37.027 09:56:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 null0 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 null1 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3017339 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3017339 /tmp/host.sock 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3017339 ']' 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:37.027 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 [2024-11-20 09:56:00.095596] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:23:37.027 [2024-11-20 09:56:00.095646] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017339 ] 00:23:37.027 [2024-11-20 09:56:00.169424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.027 [2024-11-20 09:56:00.212189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:37.027 
09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.027 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:37.288 09:56:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:37.288 
09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.288 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.289 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.549 [2024-11-20 09:56:00.633734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:37.549 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:37.550 09:56:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:38.118 [2024-11-20 09:56:01.382101] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:38.118 [2024-11-20 09:56:01.382119] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:38.118 [2024-11-20 09:56:01.382134] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.377 [2024-11-20 09:56:01.470398] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:38.377 [2024-11-20 09:56:01.573165] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:38.377 [2024-11-20 09:56:01.573964] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x10cedd0:1 started. 00:23:38.377 [2024-11-20 09:56:01.575343] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:38.377 [2024-11-20 09:56:01.575359] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:38.377 [2024-11-20 09:56:01.580750] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10cedd0 was disconnected and freed. delete nvme_qpair. 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.636 09:56:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.636 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:38.896 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:38.896 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:38.896 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.896 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.896 09:56:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:38.896 
09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:38.896 [2024-11-20 09:56:02.025800] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10cf1a0:1 started. 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.896 [2024-11-20 09:56:02.032126] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10cf1a0 was disconnected and freed. delete nvme_qpair. 
00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.896 [2024-11-20 09:56:02.125819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:38.896 [2024-11-20 09:56:02.126591] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:38.896 [2024-11-20 09:56:02.126611] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:38.896 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:38.897 09:56:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:38.897 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.897 [2024-11-20 09:56:02.212868] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:39.156 09:56:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:39.156 [2024-11-20 09:56:02.317521] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:39.156 [2024-11-20 09:56:02.317554] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:39.156 [2024-11-20 09:56:02.317566] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:23:39.156 [2024-11-20 09:56:02.317570] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.095 [2024-11-20 09:56:03.377620] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:40.095 [2024-11-20 09:56:03.377646] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.095 [2024-11-20 09:56:03.377722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.095 [2024-11-20 09:56:03.377738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.095 [2024-11-20 09:56:03.377746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:40.095 [2024-11-20 09:56:03.377752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.095 [2024-11-20 09:56:03.377759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.095 [2024-11-20 09:56:03.377766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.095 [2024-11-20 09:56:03.377773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.095 [2024-11-20 09:56:03.377779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.095 [2024-11-20 09:56:03.377786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:40.095 09:56:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.095 [2024-11-20 09:56:03.387734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.095 [2024-11-20 09:56:03.397769] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.095 [2024-11-20 09:56:03.397782] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:40.095 [2024-11-20 09:56:03.397787] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.095 [2024-11-20 09:56:03.397792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.095 [2024-11-20 09:56:03.397809] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:40.095 [2024-11-20 09:56:03.398028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.095 [2024-11-20 09:56:03.398045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.095 [2024-11-20 09:56:03.398057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.095 [2024-11-20 09:56:03.398070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.095 [2024-11-20 09:56:03.398082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.095 [2024-11-20 09:56:03.398090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.095 [2024-11-20 09:56:03.398098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:40.095 [2024-11-20 09:56:03.398105] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.095 [2024-11-20 09:56:03.398110] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.095 [2024-11-20 09:56:03.398114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:40.095 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.095 [2024-11-20 09:56:03.407841] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.095 [2024-11-20 09:56:03.407852] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:40.095 [2024-11-20 09:56:03.407856] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.095 [2024-11-20 09:56:03.407860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.095 [2024-11-20 09:56:03.407874] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:40.095 [2024-11-20 09:56:03.408081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.095 [2024-11-20 09:56:03.408095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.095 [2024-11-20 09:56:03.408103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.095 [2024-11-20 09:56:03.408113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.095 [2024-11-20 09:56:03.408124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.095 [2024-11-20 09:56:03.408130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.095 [2024-11-20 09:56:03.408137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:40.095 [2024-11-20 09:56:03.408144] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.095 [2024-11-20 09:56:03.408148] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.095 [2024-11-20 09:56:03.408152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:40.095 [2024-11-20 09:56:03.417905] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.096 [2024-11-20 09:56:03.417916] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:40.096 [2024-11-20 09:56:03.417920] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.096 [2024-11-20 09:56:03.417924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.096 [2024-11-20 09:56:03.417938] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:40.096 [2024-11-20 09:56:03.418176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.096 [2024-11-20 09:56:03.418190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.096 [2024-11-20 09:56:03.418197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.096 [2024-11-20 09:56:03.418208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.096 [2024-11-20 09:56:03.418218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.096 [2024-11-20 09:56:03.418225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.096 [2024-11-20 09:56:03.418231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:40.096 [2024-11-20 09:56:03.418237] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:40.096 [2024-11-20 09:56:03.418241] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.096 [2024-11-20 09:56:03.418245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:40.356 [2024-11-20 09:56:03.427971] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.356 [2024-11-20 09:56:03.427985] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:40.356 [2024-11-20 09:56:03.427989] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.356 [2024-11-20 09:56:03.427993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.356 [2024-11-20 09:56:03.428008] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:40.356 [2024-11-20 09:56:03.428120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.356 [2024-11-20 09:56:03.428133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.356 [2024-11-20 09:56:03.428141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.356 [2024-11-20 09:56:03.428152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.356 [2024-11-20 09:56:03.428162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.356 [2024-11-20 09:56:03.428168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.356 [2024-11-20 09:56:03.428175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:40.356 [2024-11-20 09:56:03.428182] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.356 [2024-11-20 09:56:03.428186] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.356 [2024-11-20 09:56:03.428190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.356 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.356 [2024-11-20 09:56:03.438040] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:23:40.356 [2024-11-20 09:56:03.438052] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:40.356 [2024-11-20 09:56:03.438056] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.356 [2024-11-20 09:56:03.438060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.356 [2024-11-20 09:56:03.438075] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:40.356 [2024-11-20 09:56:03.438301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.356 [2024-11-20 09:56:03.438313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.356 [2024-11-20 09:56:03.438321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.356 [2024-11-20 09:56:03.438334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.356 [2024-11-20 09:56:03.438344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.356 [2024-11-20 09:56:03.438351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.356 [2024-11-20 09:56:03.438358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:40.356 [2024-11-20 09:56:03.438365] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.356 [2024-11-20 09:56:03.438369] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:23:40.356 [2024-11-20 09:56:03.438373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:40.356 [2024-11-20 09:56:03.448106] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.356 [2024-11-20 09:56:03.448120] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:40.356 [2024-11-20 09:56:03.448124] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.356 [2024-11-20 09:56:03.448128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.356 [2024-11-20 09:56:03.448143] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:40.356 [2024-11-20 09:56:03.448299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.356 [2024-11-20 09:56:03.448312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.356 [2024-11-20 09:56:03.448320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.356 [2024-11-20 09:56:03.448340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.356 [2024-11-20 09:56:03.448356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.356 [2024-11-20 09:56:03.448363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.356 [2024-11-20 09:56:03.448370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:40.356 [2024-11-20 09:56:03.448376] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.356 [2024-11-20 09:56:03.448380] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.356 [2024-11-20 09:56:03.448384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:40.356 [2024-11-20 09:56:03.458175] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.356 [2024-11-20 09:56:03.458187] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:40.356 [2024-11-20 09:56:03.458191] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.356 [2024-11-20 09:56:03.458195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.356 [2024-11-20 09:56:03.458209] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:40.356 [2024-11-20 09:56:03.458314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.356 [2024-11-20 09:56:03.458326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.356 [2024-11-20 09:56:03.458335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.356 [2024-11-20 09:56:03.458345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.356 [2024-11-20 09:56:03.458355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.356 [2024-11-20 09:56:03.458361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.356 [2024-11-20 09:56:03.458368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:40.356 [2024-11-20 09:56:03.458374] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.356 [2024-11-20 09:56:03.458378] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.356 [2024-11-20 09:56:03.458382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:40.357 [2024-11-20 09:56:03.468240] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.357 [2024-11-20 09:56:03.468252] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:40.357 [2024-11-20 09:56:03.468256] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.357 [2024-11-20 09:56:03.468260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.357 [2024-11-20 09:56:03.468274] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:40.357 [2024-11-20 09:56:03.468386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.357 [2024-11-20 09:56:03.468398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.357 [2024-11-20 09:56:03.468409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.357 [2024-11-20 09:56:03.468419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.357 [2024-11-20 09:56:03.468428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.357 [2024-11-20 09:56:03.468434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.357 [2024-11-20 09:56:03.468441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:40.357 [2024-11-20 09:56:03.468446] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.357 [2024-11-20 09:56:03.468450] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.357 [2024-11-20 09:56:03.468454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:40.357 [2024-11-20 09:56:03.478305] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.357 [2024-11-20 09:56:03.478317] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:40.357 [2024-11-20 09:56:03.478321] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.357 [2024-11-20 09:56:03.478325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.357 [2024-11-20 09:56:03.478338] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:40.357 [2024-11-20 09:56:03.478443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.357 [2024-11-20 09:56:03.478457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.357 [2024-11-20 09:56:03.478464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.357 [2024-11-20 09:56:03.478474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.357 [2024-11-20 09:56:03.478490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.357 [2024-11-20 09:56:03.478497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.357 [2024-11-20 09:56:03.478505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:40.357 [2024-11-20 09:56:03.478510] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.357 [2024-11-20 09:56:03.478514] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.357 [2024-11-20 09:56:03.478518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:40.357 [2024-11-20 09:56:03.488370] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.357 [2024-11-20 09:56:03.488382] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:40.357 [2024-11-20 09:56:03.488386] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.357 [2024-11-20 09:56:03.488391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.357 [2024-11-20 09:56:03.488405] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:40.357 [2024-11-20 09:56:03.488491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.357 [2024-11-20 09:56:03.488503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.357 [2024-11-20 09:56:03.488510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.357 [2024-11-20 09:56:03.488520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.357 [2024-11-20 09:56:03.488537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.357 [2024-11-20 09:56:03.488544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.357 [2024-11-20 09:56:03.488551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:40.357 [2024-11-20 09:56:03.488557] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.357 [2024-11-20 09:56:03.488562] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.357 [2024-11-20 09:56:03.488566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:40.357 [2024-11-20 09:56:03.498436] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.357 [2024-11-20 09:56:03.498449] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:40.357 [2024-11-20 09:56:03.498456] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:40.357 [2024-11-20 09:56:03.498460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:40.357 [2024-11-20 09:56:03.498474] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:40.357 [2024-11-20 09:56:03.498764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:40.357 [2024-11-20 09:56:03.498778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x109f390 with addr=10.0.0.2, port=4420 00:23:40.357 [2024-11-20 09:56:03.498790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f390 is same with the state(6) to be set 00:23:40.357 [2024-11-20 09:56:03.498802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109f390 (9): Bad file descriptor 00:23:40.357 [2024-11-20 09:56:03.498825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:40.357 [2024-11-20 09:56:03.498833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:40.357 [2024-11-20 09:56:03.498840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:40.357 [2024-11-20 09:56:03.498846] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:40.357 [2024-11-20 09:56:03.498850] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:40.357 [2024-11-20 09:56:03.498854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:40.357 [2024-11-20 09:56:03.503839] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:40.357 [2024-11-20 09:56:03.503855] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:40.357 09:56:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.295 09:56:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.295 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:41.555 09:56:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.555 
09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:41.555 09:56:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.555 09:56:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.935 [2024-11-20 09:56:05.843417] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:42.935 [2024-11-20 09:56:05.843433] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:42.935 [2024-11-20 09:56:05.843444] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:42.935 [2024-11-20 09:56:05.931715] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:42.935 [2024-11-20 09:56:06.076587] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:42.935 [2024-11-20 09:56:06.077109] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x10b0820:1 started. 00:23:42.936 [2024-11-20 09:56:06.078706] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:42.936 [2024-11-20 09:56:06.078730] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.936 [2024-11-20 09:56:06.082352] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x10b0820 was disconnected and freed. delete nvme_qpair. 
00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.936 request: 00:23:42.936 { 00:23:42.936 "name": "nvme", 00:23:42.936 "trtype": "tcp", 00:23:42.936 "traddr": "10.0.0.2", 00:23:42.936 "adrfam": "ipv4", 00:23:42.936 "trsvcid": "8009", 00:23:42.936 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:42.936 "wait_for_attach": true, 00:23:42.936 "method": "bdev_nvme_start_discovery", 00:23:42.936 "req_id": 1 00:23:42.936 } 00:23:42.936 Got JSON-RPC error response 00:23:42.936 response: 00:23:42.936 { 00:23:42.936 "code": -17, 00:23:42.936 "message": "File exists" 00:23:42.936 } 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.936 request: 00:23:42.936 { 00:23:42.936 "name": "nvme_second", 00:23:42.936 "trtype": "tcp", 00:23:42.936 "traddr": "10.0.0.2", 00:23:42.936 "adrfam": "ipv4", 00:23:42.936 "trsvcid": "8009", 00:23:42.936 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:42.936 "wait_for_attach": true, 00:23:42.936 "method": "bdev_nvme_start_discovery", 00:23:42.936 "req_id": 1 00:23:42.936 } 00:23:42.936 Got JSON-RPC error response 00:23:42.936 response: 00:23:42.936 { 00:23:42.936 "code": -17, 00:23:42.936 "message": "File exists" 00:23:42.936 } 
00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:42.936 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 
00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:43.196 09:56:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.135 [2024-11-20 09:56:07.322465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.135 [2024-11-20 09:56:07.322492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b3790 with addr=10.0.0.2, port=8010 00:23:44.135 [2024-11-20 09:56:07.322506] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:44.135 [2024-11-20 09:56:07.322513] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:44.135 [2024-11-20 09:56:07.322519] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:45.074 [2024-11-20 09:56:08.324909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.074 [2024-11-20 09:56:08.324934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b3790 with addr=10.0.0.2, port=8010 00:23:45.074 [2024-11-20 09:56:08.324945] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:45.074 [2024-11-20 09:56:08.324956] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:45.074 [2024-11-20 09:56:08.324962] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:46.012 [2024-11-20 09:56:09.327092] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:46.012 request: 00:23:46.012 { 00:23:46.012 "name": "nvme_second", 00:23:46.012 "trtype": "tcp", 00:23:46.012 "traddr": "10.0.0.2", 00:23:46.012 "adrfam": "ipv4", 00:23:46.012 "trsvcid": "8010", 00:23:46.012 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:46.012 "wait_for_attach": false, 00:23:46.012 "attach_timeout_ms": 3000, 00:23:46.012 "method": "bdev_nvme_start_discovery", 00:23:46.012 "req_id": 1 
00:23:46.012 } 00:23:46.012 Got JSON-RPC error response 00:23:46.012 response: 00:23:46.012 { 00:23:46.012 "code": -110, 00:23:46.012 "message": "Connection timed out" 00:23:46.012 } 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.012 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3017339 00:23:46.271 09:56:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.271 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.272 rmmod nvme_tcp 00:23:46.272 rmmod nvme_fabrics 00:23:46.272 rmmod nvme_keyring 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3017313 ']' 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3017313 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3017313 ']' 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3017313 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3017313 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3017313' 00:23:46.272 killing process with pid 3017313 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3017313 00:23:46.272 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3017313 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.532 09:56:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.439 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:23:48.439 00:23:48.439 real 0m18.169s 00:23:48.439 user 0m22.383s 00:23:48.439 sys 0m5.957s 00:23:48.439 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.439 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.439 ************************************ 00:23:48.439 END TEST nvmf_host_discovery 00:23:48.439 ************************************ 00:23:48.439 09:56:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:48.439 09:56:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.439 09:56:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.439 09:56:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.699 ************************************ 00:23:48.699 START TEST nvmf_host_multipath_status 00:23:48.699 ************************************ 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:48.699 * Looking for test storage... 
00:23:48.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # lcov --version 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:48.699 09:56:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:48.699 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.700 09:56:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:23:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.700 --rc genhtml_branch_coverage=1 00:23:48.700 --rc genhtml_function_coverage=1 00:23:48.700 --rc genhtml_legend=1 00:23:48.700 --rc geninfo_all_blocks=1 00:23:48.700 --rc geninfo_unexecuted_blocks=1 00:23:48.700 00:23:48.700 ' 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:23:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.700 --rc genhtml_branch_coverage=1 00:23:48.700 --rc genhtml_function_coverage=1 00:23:48.700 --rc genhtml_legend=1 00:23:48.700 --rc geninfo_all_blocks=1 00:23:48.700 --rc geninfo_unexecuted_blocks=1 00:23:48.700 00:23:48.700 ' 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:23:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.700 --rc genhtml_branch_coverage=1 00:23:48.700 --rc genhtml_function_coverage=1 00:23:48.700 --rc genhtml_legend=1 00:23:48.700 --rc geninfo_all_blocks=1 00:23:48.700 --rc geninfo_unexecuted_blocks=1 00:23:48.700 00:23:48.700 ' 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:23:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.700 --rc genhtml_branch_coverage=1 00:23:48.700 --rc genhtml_function_coverage=1 00:23:48.700 --rc genhtml_legend=1 00:23:48.700 --rc geninfo_all_blocks=1 00:23:48.700 --rc geninfo_unexecuted_blocks=1 00:23:48.700 00:23:48.700 ' 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:48.700 
09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.700 09:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.700 09:56:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.700 09:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:55.392 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:55.392 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:55.392 Found net devices under 0000:86:00.0: cvl_0_0 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.392 09:56:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:55.392 Found net devices under 0000:86:00.1: cvl_0_1 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.392 09:56:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.392 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:23:55.393 00:23:55.393 --- 10.0.0.2 ping statistics --- 00:23:55.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.393 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:23:55.393 00:23:55.393 --- 10.0.0.1 ping statistics --- 00:23:55.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.393 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3022580 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 3022580 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3022580 ']' 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.393 09:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.393 [2024-11-20 09:56:17.931009] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:23:55.393 [2024-11-20 09:56:17.931059] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.393 [2024-11-20 09:56:18.011872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:55.393 [2024-11-20 09:56:18.053389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.393 [2024-11-20 09:56:18.053427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:55.393 [2024-11-20 09:56:18.053434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.393 [2024-11-20 09:56:18.053440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.393 [2024-11-20 09:56:18.053446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.393 [2024-11-20 09:56:18.054619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.393 [2024-11-20 09:56:18.054621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3022580 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:55.393 [2024-11-20 09:56:18.351799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:55.393 Malloc0 00:23:55.393 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:55.653 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:55.912 09:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.912 [2024-11-20 09:56:19.168854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.912 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:56.171 [2024-11-20 09:56:19.377367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:56.171 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3022888 00:23:56.171 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:56.171 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.171 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3022888 /var/tmp/bdevperf.sock 00:23:56.171 09:56:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3022888 ']' 00:23:56.171 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.171 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.171 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.171 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.171 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:56.430 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.430 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:56.431 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:56.690 09:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:57.257 Nvme0n1 00:23:57.257 09:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:57.515 Nvme0n1 00:23:57.515 09:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:57.515 09:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:00.050 09:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:00.050 09:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:00.050 09:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:00.050 09:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:00.985 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:00.985 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:00.985 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.985 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.244 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.244 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:01.244 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.244 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.503 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.503 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.503 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.503 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.764 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.764 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.764 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.764 09:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:02.022 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.022 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:02.022 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.022 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:02.022 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.022 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:02.022 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.022 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:02.281 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.281 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:02.281 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.539 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:02.798 09:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:03.737 09:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:03.737 09:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:03.737 09:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.737 09:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.996 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.996 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:03.996 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.996 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.254 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.254 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.254 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.254 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.254 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.254 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.254 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.254 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.513 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.513 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.513 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.513 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.772 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.772 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.772 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.772 09:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.030 09:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.030 09:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:05.030 09:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:05.288 09:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:05.288 09:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:06.661 09:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:06.661 09:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:06.661 09:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.661 09:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.661 09:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.661 09:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:06.661 09:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.661 09:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.919 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.919 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.919 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.919 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:07.176 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.176 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:07.176 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.176 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:07.176 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.176 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:07.176 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.176 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.435 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.435 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.435 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.435 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.693 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.693 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:07.694 09:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.953 09:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:08.210 09:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:09.145 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:09.145 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:09.145 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.146 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:09.404 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.404 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:09.404 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.404 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.663 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:09.663 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.663 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.663 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.663 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.663 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.663 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.663 09:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.922 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.922 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.922 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.922 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:10.181 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.181 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:10.181 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.181 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.440 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.440 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:10.440 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:10.699 09:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:10.699 09:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:12.077 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:12.077 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:12.077 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.077 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:12.077 09:56:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.077 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:12.077 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.077 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:12.336 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.336 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:12.336 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.336 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:12.336 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.336 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:12.336 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.336 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:12.595 
09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.595 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:12.595 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.595 09:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.854 09:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.854 09:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:12.854 09:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.854 09:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:13.113 09:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.113 09:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:13.113 09:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:13.373 09:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:13.373 09:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:14.750 09:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:14.750 09:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:14.750 09:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.750 09:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:14.750 09:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.750 09:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:14.750 09:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.750 09:56:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.750 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.750 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.750 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.750 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:15.009 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.009 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:15.009 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.009 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:15.268 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.268 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:15.268 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:15.268 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.527 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:15.527 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:15.528 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.528 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:15.787 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.787 09:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:15.788 09:56:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:15.788 09:56:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:16.047 09:56:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:16.307 09:56:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:17.243 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:17.243 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:17.243 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:17.243 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:17.501 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.501 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:17.501 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.501 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:17.760 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.760 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:17.760 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.760 09:56:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:18.019 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.019 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:18.019 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:18.019 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:18.019 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.019 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:18.279 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.279 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:18.279 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.279 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:18.280 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.280 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:18.539 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.539 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:18.539 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:18.798 09:56:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:19.057 09:56:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:19.991 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:19.991 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:19.991 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.991 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:20.249 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:20.249 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:20.249 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.249 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:20.508 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.508 09:56:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:20.508 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.508 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.508 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.508 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.508 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.508 09:56:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.768 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.768 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.768 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.768 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:21.025 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.025 
09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:21.025 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.025 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:21.285 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.285 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:21.285 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:21.570 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:21.570 09:56:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:22.945 09:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:22.945 09:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:22.945 09:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.945 09:56:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.945 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.945 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:22.945 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.945 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:23.203 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.203 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:23.203 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.203 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:23.203 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.203 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:23.203 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.203 09:56:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:23.462 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.462 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:23.462 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.462 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:23.721 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.721 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:23.721 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.721 09:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:23.980 09:56:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.980 09:56:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:23.980 09:56:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:24.241 09:56:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:24.499 09:56:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:25.434 09:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:25.434 09:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:25.434 09:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.434 09:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:25.695 09:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.695 09:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:25.695 09:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.695 09:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:25.695 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.695 09:56:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:25.695 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.695 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:25.954 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.954 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:25.954 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.954 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:26.212 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.212 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:26.212 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.212 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:26.473 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.473 
09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:26.473 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.473 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3022888 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3022888 ']' 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3022888 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3022888 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3022888' 00:24:26.731 killing process with pid 3022888 00:24:26.731 09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3022888 00:24:26.731 
09:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3022888 00:24:26.731 { 00:24:26.731 "results": [ 00:24:26.731 { 00:24:26.731 "job": "Nvme0n1", 00:24:26.731 "core_mask": "0x4", 00:24:26.731 "workload": "verify", 00:24:26.731 "status": "terminated", 00:24:26.731 "verify_range": { 00:24:26.731 "start": 0, 00:24:26.731 "length": 16384 00:24:26.731 }, 00:24:26.731 "queue_depth": 128, 00:24:26.731 "io_size": 4096, 00:24:26.731 "runtime": 28.96469, 00:24:26.731 "iops": 10439.642198828988, 00:24:26.731 "mibps": 40.77985233917573, 00:24:26.731 "io_failed": 0, 00:24:26.731 "io_timeout": 0, 00:24:26.731 "avg_latency_us": 12241.121605414879, 00:24:26.731 "min_latency_us": 637.5513043478261, 00:24:26.731 "max_latency_us": 3019898.88 00:24:26.731 } 00:24:26.731 ], 00:24:26.731 "core_count": 1 00:24:26.731 } 00:24:26.992 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3022888 00:24:26.992 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:26.992 [2024-11-20 09:56:19.453977] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:24:26.992 [2024-11-20 09:56:19.454034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022888 ] 00:24:26.992 [2024-11-20 09:56:19.529589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.992 [2024-11-20 09:56:19.570616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.992 Running I/O for 90 seconds... 
00:24:26.992 11192.00 IOPS, 43.72 MiB/s [2024-11-20T08:56:50.324Z] 11219.00 IOPS, 43.82 MiB/s [2024-11-20T08:56:50.324Z] 11235.00 IOPS, 43.89 MiB/s [2024-11-20T08:56:50.324Z] 11246.75 IOPS, 43.93 MiB/s [2024-11-20T08:56:50.324Z] 11245.20 IOPS, 43.93 MiB/s [2024-11-20T08:56:50.324Z] 11240.67 IOPS, 43.91 MiB/s [2024-11-20T08:56:50.324Z] 11261.29 IOPS, 43.99 MiB/s [2024-11-20T08:56:50.324Z] 11250.00 IOPS, 43.95 MiB/s [2024-11-20T08:56:50.324Z] 11245.89 IOPS, 43.93 MiB/s [2024-11-20T08:56:50.324Z] 11220.20 IOPS, 43.83 MiB/s [2024-11-20T08:56:50.324Z] 11217.09 IOPS, 43.82 MiB/s [2024-11-20T08:56:50.324Z] 11216.83 IOPS, 43.82 MiB/s [2024-11-20T08:56:50.324Z] [2024-11-20 09:56:33.813385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.992 [2024-11-20 09:56:33.813425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:26.992 [2024-11-20 09:56:33.813463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.992 [2024-11-20 09:56:33.813472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:26.992 [2024-11-20 09:56:33.813485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.992 [2024-11-20 09:56:33.813492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:26.992 [2024-11-20 09:56:33.813505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.992 [2024-11-20 09:56:33.813512] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:26.992 [2024-11-20 09:56:33.813525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.992 [2024-11-20 09:56:33.813531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:26.992 [2024-11-20 09:56:33.813543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.992 [2024-11-20 09:56:33.813550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:26.992 [2024-11-20 09:56:33.813562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.992 [2024-11-20 09:56:33.813569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.992 [2024-11-20 09:56:33.813598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.992 [2024-11-20 09:56:33.813605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:26.992 [2024-11-20 09:56:33.813617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.992 [2024-11-20 09:56:33.813628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:26.992 [2024-11-20 09:56:33.813641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:6 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.813924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.813931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:26.992 [2024-11-20 09:56:33.814612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.992 [2024-11-20 09:56:33.814620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:106264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:106384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.814992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.814999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-11-20 09:56:33.815574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-11-20 09:56:33.815598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-11-20 09:56:33.815621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-11-20 09:56:33.815644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-11-20 09:56:33.815668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-11-20 09:56:33.815691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.993 [2024-11-20 09:56:33.815715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.993 [2024-11-20 09:56:33.815859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:24:26.993 [2024-11-20 09:56:33.815874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.815882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.815898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.815905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.815921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.815928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.815944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.815957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.815975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.815983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:26.994 [2024-11-20 09:56:33.816688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.994 [2024-11-20 09:56:33.816714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.994 [2024-11-20 09:56:33.816740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.994 [2024-11-20 09:56:33.816764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.994 [2024-11-20 09:56:33.816789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.994 [2024-11-20 09:56:33.816814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 09:56:33.816832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:26.994 [2024-11-20 09:56:33.816839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:24:26.994 [2024-11-20 
09:56:33.816857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-11-20 09:56:33.816864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:33.816882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.994 [2024-11-20 09:56:33.816889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:26.994 11100.31 IOPS, 43.36 MiB/s [2024-11-20T08:56:50.326Z] 10307.43 IOPS, 40.26 MiB/s [2024-11-20T08:56:50.326Z] 9620.27 IOPS, 37.58 MiB/s [2024-11-20T08:56:50.326Z] 9107.88 IOPS, 35.58 MiB/s [2024-11-20T08:56:50.326Z] 9236.94 IOPS, 36.08 MiB/s [2024-11-20T08:56:50.326Z] 9347.89 IOPS, 36.52 MiB/s [2024-11-20T08:56:50.326Z] 9519.53 IOPS, 37.19 MiB/s [2024-11-20T08:56:50.326Z] 9714.60 IOPS, 37.95 MiB/s [2024-11-20T08:56:50.326Z] 9886.67 IOPS, 38.62 MiB/s [2024-11-20T08:56:50.326Z] 9944.64 IOPS, 38.85 MiB/s [2024-11-20T08:56:50.326Z] 9998.96 IOPS, 39.06 MiB/s [2024-11-20T08:56:50.326Z] 10053.50 IOPS, 39.27 MiB/s [2024-11-20T08:56:50.326Z] 10180.16 IOPS, 39.77 MiB/s [2024-11-20T08:56:50.326Z] 10306.54 IOPS, 40.26 MiB/s [2024-11-20T08:56:50.326Z] [2024-11-20 09:56:47.554348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 
[2024-11-20 09:56:47.554431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 
09:56:47.554554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.554644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.554657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 
09:56:47.554665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.555919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.555940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.555965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.555973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.555986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.555997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.556010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.556016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.556029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.556036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 
09:56:47.556048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.556055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.556067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.556074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.556086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.556093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.556105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.556112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.556134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.556141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.556153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 
09:56:47.556160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.556172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.994 [2024-11-20 09:56:47.556178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:26.994 [2024-11-20 09:56:47.556190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 
09:56:47.556266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.995 [2024-11-20 09:56:47.556272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.995 [2024-11-20 09:56:47.556291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.995 [2024-11-20 09:56:47.556310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.995 [2024-11-20 09:56:47.556329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 
09:56:47.556366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 09:56:47.556454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:26.995 [2024-11-20 
09:56:47.556475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.995 [2024-11-20 09:56:47.556482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:26.995 10390.74 IOPS, 40.59 MiB/s [2024-11-20T08:56:50.327Z] 10420.71 IOPS, 40.71 MiB/s [2024-11-20T08:56:50.327Z] Received shutdown signal, test time was about 28.965338 seconds 00:24:26.995 00:24:26.995 Latency(us) 00:24:26.995 [2024-11-20T08:56:50.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.995 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:26.995 Verification LBA range: start 0x0 length 0x4000 00:24:26.995 Nvme0n1 : 28.96 10439.64 40.78 0.00 0.00 12241.12 637.55 3019898.88 00:24:26.995 [2024-11-20T08:56:50.327Z] =================================================================================================================== 00:24:26.995 [2024-11-20T08:56:50.327Z] Total : 10439.64 40.78 0.00 0.00 12241.12 637.55 3019898.88 00:24:26.995 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.995 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:26.995 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:26.995 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:26.995 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:26.995 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # sync 00:24:26.995 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:26.995 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.253 rmmod nvme_tcp 00:24:27.253 rmmod nvme_fabrics 00:24:27.253 rmmod nvme_keyring 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3022580 ']' 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3022580 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3022580 ']' 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3022580 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3022580 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3022580' 00:24:27.253 killing process with pid 3022580 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3022580 00:24:27.253 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3022580 00:24:27.512 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:27.512 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:27.512 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:27.512 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:27.513 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:27.513 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:27.513 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:27.513 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.513 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.513 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.513 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.513 09:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.418 09:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush 
cvl_0_1 00:24:29.418 00:24:29.418 real 0m40.879s 00:24:29.418 user 1m51.161s 00:24:29.418 sys 0m11.554s 00:24:29.418 09:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.418 09:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:29.418 ************************************ 00:24:29.418 END TEST nvmf_host_multipath_status 00:24:29.418 ************************************ 00:24:29.418 09:56:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:29.418 09:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:29.418 09:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.418 09:56:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.678 ************************************ 00:24:29.678 START TEST nvmf_discovery_remove_ifc 00:24:29.678 ************************************ 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:29.678 * Looking for test storage... 
00:24:29.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1703 -- # lcov --version 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:29.678 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1716 -- # 
export 'LCOV_OPTS= 00:24:29.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.679 --rc genhtml_branch_coverage=1 00:24:29.679 --rc genhtml_function_coverage=1 00:24:29.679 --rc genhtml_legend=1 00:24:29.679 --rc geninfo_all_blocks=1 00:24:29.679 --rc geninfo_unexecuted_blocks=1 00:24:29.679 00:24:29.679 ' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:24:29.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.679 --rc genhtml_branch_coverage=1 00:24:29.679 --rc genhtml_function_coverage=1 00:24:29.679 --rc genhtml_legend=1 00:24:29.679 --rc geninfo_all_blocks=1 00:24:29.679 --rc geninfo_unexecuted_blocks=1 00:24:29.679 00:24:29.679 ' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:24:29.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.679 --rc genhtml_branch_coverage=1 00:24:29.679 --rc genhtml_function_coverage=1 00:24:29.679 --rc genhtml_legend=1 00:24:29.679 --rc geninfo_all_blocks=1 00:24:29.679 --rc geninfo_unexecuted_blocks=1 00:24:29.679 00:24:29.679 ' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:24:29.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.679 --rc genhtml_branch_coverage=1 00:24:29.679 --rc genhtml_function_coverage=1 00:24:29.679 --rc genhtml_legend=1 00:24:29.679 --rc geninfo_all_blocks=1 00:24:29.679 --rc geninfo_unexecuted_blocks=1 00:24:29.679 00:24:29.679 ' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:29.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:29.679 
09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:29.679 09:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:36.253 09:56:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.253 09:56:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:36.253 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.253 09:56:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:36.253 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.253 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:36.254 Found net devices under 0000:86:00.0: cvl_0_0 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:36.254 Found net devices under 0000:86:00.1: cvl_0_1 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:36.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:36.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:24:36.254 00:24:36.254 --- 10.0.0.2 ping statistics --- 00:24:36.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.254 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:36.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:36.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:24:36.254 00:24:36.254 --- 10.0.0.1 ping statistics --- 00:24:36.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.254 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3031460 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
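The nvmf_tcp_init steps just logged build a two-endpoint TCP test topology on one host: the target-side port is moved into a fresh network namespace, each side gets a 10.0.0.x/24 address, an iptables rule opens port 4420, and a ping in each direction verifies connectivity. A dry-run sketch of that sequence (interface and namespace names copied from the log; `run` only echoes the commands, since the real ones need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing performed by nvmf/common.sh above.
NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1

run() { echo "+ $*"; }   # swap for: run() { "$@"; } to actually execute (root)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                      # target port into the ns
run ip addr add 10.0.0.1/24 dev "$INI_IF"                  # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                     # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator
```

Because the target lives in its own namespace, later teardown steps (visible further down in the log) can delete the address or set the link down to simulate interface removal without touching the initiator side.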
nvmf/common.sh@510 -- # waitforlisten 3031460 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3031460 ']' 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.254 09:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.254 [2024-11-20 09:56:58.909481] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:24:36.254 [2024-11-20 09:56:58.909525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.254 [2024-11-20 09:56:58.989205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.254 [2024-11-20 09:56:59.030379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.254 [2024-11-20 09:56:59.030414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:36.254 [2024-11-20 09:56:59.030423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.254 [2024-11-20 09:56:59.030431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.254 [2024-11-20 09:56:59.030438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.254 [2024-11-20 09:56:59.031061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.254 [2024-11-20 09:56:59.174390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.254 [2024-11-20 09:56:59.182563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:36.254 null0 00:24:36.254 [2024-11-20 09:56:59.214547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3031527 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3031527 /tmp/host.sock 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3031527 ']' 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:36.254 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:36.255 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.255 [2024-11-20 09:56:59.282547] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:24:36.255 [2024-11-20 09:56:59.282589] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031527 ] 00:24:36.255 [2024-11-20 09:56:59.355120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.255 [2024-11-20 09:56:59.397836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.255 09:56:59 
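At this point two SPDK apps are up: the target (`nvmf_tgt -i 0 ... -m 0x2` inside the namespace, RPC on /var/tmp/spdk.sock) and the host app (`nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc`), each gated by `waitforlisten`. The loop shape below is an assumption about what such a wait does, not SPDK's actual implementation: poll until the Unix-domain RPC socket appears, with a bounded retry count.

```shell
#!/usr/bin/env bash
# Hedged sketch of a waitforlisten-style gate: block until the app's RPC
# socket exists (or give up after N retries).
wait_for_sock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -S $sock ]] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1
}

# Usage (paths from the log): wait_for_sock /tmp/host.sock 100
```

The real helper also checks that the PID it launched is still alive, so a crashed app fails fast instead of burning the whole retry budget.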
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.255 09:56:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.635 [2024-11-20 09:57:00.582467] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:37.635 [2024-11-20 09:57:00.582492] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:37.635 [2024-11-20 09:57:00.582507] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:37.635 [2024-11-20 09:57:00.708900] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:37.635 [2024-11-20 09:57:00.890867] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:37.635 [2024-11-20 09:57:00.891665] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6479f0:1 started. 
00:24:37.635 [2024-11-20 09:57:00.893035] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:24:37.635 [2024-11-20 09:57:00.893073] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:24:37.635 [2024-11-20 09:57:00.893093] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:24:37.635 [2024-11-20 09:57:00.893107] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:37.635 [2024-11-20 09:57:00.893126] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.635 [2024-11-20 09:57:00.940536] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6479f0 was disconnected and freed. delete nvme_qpair.
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:24:37.635 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:24:37.636 09:57:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:24:37.896 09:57:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:24:38.909 09:57:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:24:39.875 09:57:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:24:41.251 09:57:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:42.185 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:42.185 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:42.186 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:42.186 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.186 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:42.186 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:42.186 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:42.186 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.186 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:24:42.186 09:57:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:43.123 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:43.123 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:43.123 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:43.123 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.123 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:43.123 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:43.123 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:43.123 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.123 [2024-11-20 09:57:06.334770] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:24:43.123 [2024-11-20 09:57:06.334813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:43.123 [2024-11-20 09:57:06.334824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.123 [2024-11-20 09:57:06.334833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:43.123 [2024-11-20 09:57:06.334841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.123 [2024-11-20 09:57:06.334848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:43.123 [2024-11-20 09:57:06.334855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.123 [2024-11-20 09:57:06.334863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:43.123 [2024-11-20 09:57:06.334870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.123 [2024-11-20 09:57:06.334877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:24:43.123 [2024-11-20 09:57:06.334884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:43.123 [2024-11-20 09:57:06.334892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x624220 is same with the state(6) to be set
00:24:43.124 [2024-11-20 09:57:06.344793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x624220 (9): Bad file descriptor
00:24:43.124 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:24:43.124 09:57:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:43.124 [2024-11-20 09:57:06.354828] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:43.124 [2024-11-20 09:57:06.354842] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:43.124 [2024-11-20 09:57:06.354847] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:43.124 [2024-11-20 09:57:06.354852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:43.124 [2024-11-20 09:57:06.354874] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:44.062 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:44.062 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:44.062 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:44.062 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:44.062 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:44.062 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:44.062 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:44.062 [2024-11-20 09:57:07.379962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:24:44.062 [2024-11-20 09:57:07.380007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x624220 with addr=10.0.0.2, port=4420
00:24:44.062 [2024-11-20 09:57:07.380024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x624220 is same with the state(6) to be set
00:24:44.062 [2024-11-20 09:57:07.380054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x624220 (9): Bad file descriptor
00:24:44.062 [2024-11-20 09:57:07.380466] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress.
00:24:44.062 [2024-11-20 09:57:07.380495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:44.062 [2024-11-20 09:57:07.380505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:44.062 [2024-11-20 09:57:07.380515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:44.062 [2024-11-20 09:57:07.380524] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:44.062 [2024-11-20 09:57:07.380531] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:44.062 [2024-11-20 09:57:07.380536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:44.062 [2024-11-20 09:57:07.380546] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:44.062 [2024-11-20 09:57:07.380553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:44.322 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:44.322 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:24:44.322 09:57:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:45.260 [2024-11-20 09:57:08.383033] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:45.260 [2024-11-20 09:57:08.383055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:45.260 [2024-11-20 09:57:08.383067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:45.260 [2024-11-20 09:57:08.383074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:45.260 [2024-11-20 09:57:08.383082] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:24:45.260 [2024-11-20 09:57:08.383088] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:45.260 [2024-11-20 09:57:08.383093] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:45.260 [2024-11-20 09:57:08.383097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:45.260 [2024-11-20 09:57:08.383118] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:24:45.260 [2024-11-20 09:57:08.383141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:45.260 [2024-11-20 09:57:08.383151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.260 [2024-11-20 09:57:08.383163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:45.260 [2024-11-20 09:57:08.383170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.260 [2024-11-20 09:57:08.383178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:45.260 [2024-11-20 09:57:08.383185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.260 [2024-11-20 09:57:08.383192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:45.260 [2024-11-20 09:57:08.383203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.260 [2024-11-20 09:57:08.383211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:24:45.260 [2024-11-20 09:57:08.383218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.260 [2024-11-20 09:57:08.383225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:24:45.260 [2024-11-20 09:57:08.383271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613900 (9): Bad file descriptor
00:24:45.260 [2024-11-20 09:57:08.384295] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:24:45.260 [2024-11-20 09:57:08.384306] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:24:45.260 09:57:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:46.637 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:46.637 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:46.637 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:46.637 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:46.638 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:46.638 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:46.638 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:46.638 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:46.638 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:24:46.638 09:57:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:47.206 [2024-11-20 09:57:10.437108] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:47.206 [2024-11-20 09:57:10.437130] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:47.206 [2024-11-20 09:57:10.437143] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:47.206 [2024-11-20 09:57:10.524406] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:24:47.465 [2024-11-20 09:57:10.627107] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420
00:24:47.465 [2024-11-20 09:57:10.627760] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61efd0:1 started.
00:24:47.465 [2024-11-20 09:57:10.628805] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:24:47.465 [2024-11-20 09:57:10.628836] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:24:47.465 [2024-11-20 09:57:10.628853] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:24:47.465 [2024-11-20 09:57:10.628865] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:24:47.465 [2024-11-20 09:57:10.628872] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:47.465 [2024-11-20 09:57:10.635129] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61efd0 was disconnected and freed. delete nvme_qpair.
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3031527
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3031527 ']'
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3031527
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031527
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:47.465 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031527'
killing process with pid 3031527
09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3031527
09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3031527
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3031460 ']'
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3031460
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3031460 ']'
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3031460
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:47.724 09:57:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031460
00:24:47.724 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:47.724 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:47.724 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031460'
killing process with pid 3031460
09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3031460
09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3031460
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:47.983 09:57:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:50.516 09:57:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:50.516
00:24:50.516 real 0m20.482s
00:24:50.516 user 0m24.717s
00:24:50.516 sys 0m5.906s
00:24:50.516 09:57:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:50.516 09:57:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:50.516 ************************************
00:24:50.516 END TEST nvmf_discovery_remove_ifc
00:24:50.516 ************************************
00:24:50.516 09:57:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:24:50.516 09:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:50.516 09:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:50.516 09:57:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:50.516 ************************************
00:24:50.516 START TEST nvmf_identify_kernel_target 00:24:50.517 ************************************ 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:50.517 * Looking for test storage... 00:24:50.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # lcov --version 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.517 09:57:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:24:50.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.517 --rc genhtml_branch_coverage=1 00:24:50.517 --rc genhtml_function_coverage=1 00:24:50.517 --rc genhtml_legend=1 00:24:50.517 --rc geninfo_all_blocks=1 00:24:50.517 --rc geninfo_unexecuted_blocks=1 00:24:50.517 00:24:50.517 ' 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:24:50.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.517 --rc genhtml_branch_coverage=1 00:24:50.517 --rc genhtml_function_coverage=1 00:24:50.517 --rc genhtml_legend=1 00:24:50.517 --rc geninfo_all_blocks=1 00:24:50.517 --rc geninfo_unexecuted_blocks=1 00:24:50.517 00:24:50.517 ' 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:24:50.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.517 --rc genhtml_branch_coverage=1 00:24:50.517 --rc genhtml_function_coverage=1 00:24:50.517 --rc genhtml_legend=1 00:24:50.517 --rc geninfo_all_blocks=1 00:24:50.517 --rc geninfo_unexecuted_blocks=1 00:24:50.517 00:24:50.517 ' 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:24:50.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.517 --rc genhtml_branch_coverage=1 00:24:50.517 --rc genhtml_function_coverage=1 00:24:50.517 --rc genhtml_legend=1 00:24:50.517 --rc geninfo_all_blocks=1 
00:24:50.517 --rc geninfo_unexecuted_blocks=1 00:24:50.517 00:24:50.517 ' 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.517 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:50.518 09:57:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:57.089 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.090 09:57:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:57.090 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.090 09:57:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:57.090 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.090 09:57:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:57.090 Found net devices under 0000:86:00.0: cvl_0_0 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:57.090 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.090 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:57.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:24:57.090 00:24:57.090 --- 10.0.0.2 ping statistics --- 00:24:57.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.091 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:24:57.091 00:24:57.091 --- 10.0.0.1 ping statistics --- 00:24:57.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.091 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:57.091 
09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:57.091 09:57:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:59.001 Waiting for block devices as requested 00:24:59.001 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:59.261 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:59.261 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:59.261 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:59.520 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:59.520 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:59.520 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:59.520 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:59.779 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:59.779 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:59.779 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:00.038 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:00.038 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:00.038 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:00.038 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:00.297 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:00.297 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:00.297 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:00.297 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:00.297 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:00.297 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:00.297 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:00.297 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:00.297 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:00.297 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:00.297 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:00.557 No valid GPT data, bailing 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:00.557 00:25:00.557 Discovery Log Number of Records 2, Generation counter 2 00:25:00.557 =====Discovery Log Entry 0====== 00:25:00.557 trtype: tcp 00:25:00.557 adrfam: ipv4 00:25:00.557 subtype: current discovery subsystem 
00:25:00.557 treq: not specified, sq flow control disable supported 00:25:00.557 portid: 1 00:25:00.557 trsvcid: 4420 00:25:00.557 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:00.557 traddr: 10.0.0.1 00:25:00.557 eflags: none 00:25:00.557 sectype: none 00:25:00.557 =====Discovery Log Entry 1====== 00:25:00.557 trtype: tcp 00:25:00.557 adrfam: ipv4 00:25:00.557 subtype: nvme subsystem 00:25:00.557 treq: not specified, sq flow control disable supported 00:25:00.557 portid: 1 00:25:00.557 trsvcid: 4420 00:25:00.557 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:00.557 traddr: 10.0.0.1 00:25:00.557 eflags: none 00:25:00.557 sectype: none 00:25:00.557 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:00.557 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:00.557 ===================================================== 00:25:00.557 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:00.557 ===================================================== 00:25:00.557 Controller Capabilities/Features 00:25:00.557 ================================ 00:25:00.557 Vendor ID: 0000 00:25:00.557 Subsystem Vendor ID: 0000 00:25:00.557 Serial Number: a77f0125b2185d2cdba7 00:25:00.557 Model Number: Linux 00:25:00.557 Firmware Version: 6.8.9-20 00:25:00.557 Recommended Arb Burst: 0 00:25:00.557 IEEE OUI Identifier: 00 00 00 00:25:00.557 Multi-path I/O 00:25:00.557 May have multiple subsystem ports: No 00:25:00.557 May have multiple controllers: No 00:25:00.557 Associated with SR-IOV VF: No 00:25:00.557 Max Data Transfer Size: Unlimited 00:25:00.557 Max Number of Namespaces: 0 00:25:00.557 Max Number of I/O Queues: 1024 00:25:00.557 NVMe Specification Version (VS): 1.3 00:25:00.557 NVMe Specification Version (Identify): 1.3 00:25:00.558 Maximum Queue Entries: 1024 
00:25:00.558 Contiguous Queues Required: No 00:25:00.558 Arbitration Mechanisms Supported 00:25:00.558 Weighted Round Robin: Not Supported 00:25:00.558 Vendor Specific: Not Supported 00:25:00.558 Reset Timeout: 7500 ms 00:25:00.558 Doorbell Stride: 4 bytes 00:25:00.558 NVM Subsystem Reset: Not Supported 00:25:00.558 Command Sets Supported 00:25:00.558 NVM Command Set: Supported 00:25:00.558 Boot Partition: Not Supported 00:25:00.558 Memory Page Size Minimum: 4096 bytes 00:25:00.558 Memory Page Size Maximum: 4096 bytes 00:25:00.558 Persistent Memory Region: Not Supported 00:25:00.558 Optional Asynchronous Events Supported 00:25:00.558 Namespace Attribute Notices: Not Supported 00:25:00.558 Firmware Activation Notices: Not Supported 00:25:00.558 ANA Change Notices: Not Supported 00:25:00.558 PLE Aggregate Log Change Notices: Not Supported 00:25:00.558 LBA Status Info Alert Notices: Not Supported 00:25:00.558 EGE Aggregate Log Change Notices: Not Supported 00:25:00.558 Normal NVM Subsystem Shutdown event: Not Supported 00:25:00.558 Zone Descriptor Change Notices: Not Supported 00:25:00.558 Discovery Log Change Notices: Supported 00:25:00.558 Controller Attributes 00:25:00.558 128-bit Host Identifier: Not Supported 00:25:00.558 Non-Operational Permissive Mode: Not Supported 00:25:00.558 NVM Sets: Not Supported 00:25:00.558 Read Recovery Levels: Not Supported 00:25:00.558 Endurance Groups: Not Supported 00:25:00.558 Predictable Latency Mode: Not Supported 00:25:00.558 Traffic Based Keep ALive: Not Supported 00:25:00.558 Namespace Granularity: Not Supported 00:25:00.558 SQ Associations: Not Supported 00:25:00.558 UUID List: Not Supported 00:25:00.558 Multi-Domain Subsystem: Not Supported 00:25:00.558 Fixed Capacity Management: Not Supported 00:25:00.558 Variable Capacity Management: Not Supported 00:25:00.558 Delete Endurance Group: Not Supported 00:25:00.558 Delete NVM Set: Not Supported 00:25:00.558 Extended LBA Formats Supported: Not Supported 00:25:00.558 Flexible 
Data Placement Supported: Not Supported 00:25:00.558 00:25:00.558 Controller Memory Buffer Support 00:25:00.558 ================================ 00:25:00.558 Supported: No 00:25:00.558 00:25:00.558 Persistent Memory Region Support 00:25:00.558 ================================ 00:25:00.558 Supported: No 00:25:00.558 00:25:00.558 Admin Command Set Attributes 00:25:00.558 ============================ 00:25:00.558 Security Send/Receive: Not Supported 00:25:00.558 Format NVM: Not Supported 00:25:00.558 Firmware Activate/Download: Not Supported 00:25:00.558 Namespace Management: Not Supported 00:25:00.558 Device Self-Test: Not Supported 00:25:00.558 Directives: Not Supported 00:25:00.558 NVMe-MI: Not Supported 00:25:00.558 Virtualization Management: Not Supported 00:25:00.558 Doorbell Buffer Config: Not Supported 00:25:00.558 Get LBA Status Capability: Not Supported 00:25:00.558 Command & Feature Lockdown Capability: Not Supported 00:25:00.558 Abort Command Limit: 1 00:25:00.558 Async Event Request Limit: 1 00:25:00.558 Number of Firmware Slots: N/A 00:25:00.558 Firmware Slot 1 Read-Only: N/A 00:25:00.558 Firmware Activation Without Reset: N/A 00:25:00.558 Multiple Update Detection Support: N/A 00:25:00.558 Firmware Update Granularity: No Information Provided 00:25:00.558 Per-Namespace SMART Log: No 00:25:00.558 Asymmetric Namespace Access Log Page: Not Supported 00:25:00.558 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:00.558 Command Effects Log Page: Not Supported 00:25:00.558 Get Log Page Extended Data: Supported 00:25:00.558 Telemetry Log Pages: Not Supported 00:25:00.558 Persistent Event Log Pages: Not Supported 00:25:00.558 Supported Log Pages Log Page: May Support 00:25:00.558 Commands Supported & Effects Log Page: Not Supported 00:25:00.558 Feature Identifiers & Effects Log Page:May Support 00:25:00.558 NVMe-MI Commands & Effects Log Page: May Support 00:25:00.558 Data Area 4 for Telemetry Log: Not Supported 00:25:00.558 Error Log Page Entries 
Supported: 1 00:25:00.558 Keep Alive: Not Supported 00:25:00.558 00:25:00.558 NVM Command Set Attributes 00:25:00.558 ========================== 00:25:00.558 Submission Queue Entry Size 00:25:00.558 Max: 1 00:25:00.558 Min: 1 00:25:00.558 Completion Queue Entry Size 00:25:00.558 Max: 1 00:25:00.558 Min: 1 00:25:00.558 Number of Namespaces: 0 00:25:00.558 Compare Command: Not Supported 00:25:00.558 Write Uncorrectable Command: Not Supported 00:25:00.558 Dataset Management Command: Not Supported 00:25:00.558 Write Zeroes Command: Not Supported 00:25:00.558 Set Features Save Field: Not Supported 00:25:00.558 Reservations: Not Supported 00:25:00.558 Timestamp: Not Supported 00:25:00.558 Copy: Not Supported 00:25:00.558 Volatile Write Cache: Not Present 00:25:00.558 Atomic Write Unit (Normal): 1 00:25:00.558 Atomic Write Unit (PFail): 1 00:25:00.558 Atomic Compare & Write Unit: 1 00:25:00.558 Fused Compare & Write: Not Supported 00:25:00.558 Scatter-Gather List 00:25:00.558 SGL Command Set: Supported 00:25:00.558 SGL Keyed: Not Supported 00:25:00.558 SGL Bit Bucket Descriptor: Not Supported 00:25:00.558 SGL Metadata Pointer: Not Supported 00:25:00.558 Oversized SGL: Not Supported 00:25:00.558 SGL Metadata Address: Not Supported 00:25:00.558 SGL Offset: Supported 00:25:00.558 Transport SGL Data Block: Not Supported 00:25:00.558 Replay Protected Memory Block: Not Supported 00:25:00.558 00:25:00.558 Firmware Slot Information 00:25:00.558 ========================= 00:25:00.558 Active slot: 0 00:25:00.558 00:25:00.558 00:25:00.558 Error Log 00:25:00.558 ========= 00:25:00.558 00:25:00.558 Active Namespaces 00:25:00.558 ================= 00:25:00.558 Discovery Log Page 00:25:00.558 ================== 00:25:00.558 Generation Counter: 2 00:25:00.558 Number of Records: 2 00:25:00.558 Record Format: 0 00:25:00.558 00:25:00.558 Discovery Log Entry 0 00:25:00.558 ---------------------- 00:25:00.558 Transport Type: 3 (TCP) 00:25:00.558 Address Family: 1 (IPv4) 00:25:00.558 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:00.558 Entry Flags: 00:25:00.558 Duplicate Returned Information: 0 00:25:00.558 Explicit Persistent Connection Support for Discovery: 0 00:25:00.558 Transport Requirements: 00:25:00.558 Secure Channel: Not Specified 00:25:00.558 Port ID: 1 (0x0001) 00:25:00.558 Controller ID: 65535 (0xffff) 00:25:00.558 Admin Max SQ Size: 32 00:25:00.558 Transport Service Identifier: 4420 00:25:00.558 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:00.558 Transport Address: 10.0.0.1 00:25:00.558 Discovery Log Entry 1 00:25:00.558 ---------------------- 00:25:00.558 Transport Type: 3 (TCP) 00:25:00.558 Address Family: 1 (IPv4) 00:25:00.558 Subsystem Type: 2 (NVM Subsystem) 00:25:00.558 Entry Flags: 00:25:00.558 Duplicate Returned Information: 0 00:25:00.558 Explicit Persistent Connection Support for Discovery: 0 00:25:00.558 Transport Requirements: 00:25:00.559 Secure Channel: Not Specified 00:25:00.559 Port ID: 1 (0x0001) 00:25:00.559 Controller ID: 65535 (0xffff) 00:25:00.559 Admin Max SQ Size: 32 00:25:00.559 Transport Service Identifier: 4420 00:25:00.559 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:00.559 Transport Address: 10.0.0.1 00:25:00.820 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:00.820 get_feature(0x01) failed 00:25:00.820 get_feature(0x02) failed 00:25:00.821 get_feature(0x04) failed 00:25:00.821 ===================================================== 00:25:00.821 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:00.821 ===================================================== 00:25:00.821 Controller Capabilities/Features 00:25:00.821 ================================ 00:25:00.821 Vendor ID: 0000 00:25:00.821 Subsystem Vendor ID: 
0000 00:25:00.821 Serial Number: da2292c32394a4b84fdf 00:25:00.821 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:00.821 Firmware Version: 6.8.9-20 00:25:00.821 Recommended Arb Burst: 6 00:25:00.821 IEEE OUI Identifier: 00 00 00 00:25:00.821 Multi-path I/O 00:25:00.821 May have multiple subsystem ports: Yes 00:25:00.821 May have multiple controllers: Yes 00:25:00.821 Associated with SR-IOV VF: No 00:25:00.821 Max Data Transfer Size: Unlimited 00:25:00.821 Max Number of Namespaces: 1024 00:25:00.821 Max Number of I/O Queues: 128 00:25:00.821 NVMe Specification Version (VS): 1.3 00:25:00.821 NVMe Specification Version (Identify): 1.3 00:25:00.821 Maximum Queue Entries: 1024 00:25:00.821 Contiguous Queues Required: No 00:25:00.821 Arbitration Mechanisms Supported 00:25:00.821 Weighted Round Robin: Not Supported 00:25:00.821 Vendor Specific: Not Supported 00:25:00.821 Reset Timeout: 7500 ms 00:25:00.821 Doorbell Stride: 4 bytes 00:25:00.821 NVM Subsystem Reset: Not Supported 00:25:00.821 Command Sets Supported 00:25:00.821 NVM Command Set: Supported 00:25:00.821 Boot Partition: Not Supported 00:25:00.821 Memory Page Size Minimum: 4096 bytes 00:25:00.821 Memory Page Size Maximum: 4096 bytes 00:25:00.821 Persistent Memory Region: Not Supported 00:25:00.821 Optional Asynchronous Events Supported 00:25:00.821 Namespace Attribute Notices: Supported 00:25:00.821 Firmware Activation Notices: Not Supported 00:25:00.821 ANA Change Notices: Supported 00:25:00.821 PLE Aggregate Log Change Notices: Not Supported 00:25:00.821 LBA Status Info Alert Notices: Not Supported 00:25:00.821 EGE Aggregate Log Change Notices: Not Supported 00:25:00.821 Normal NVM Subsystem Shutdown event: Not Supported 00:25:00.821 Zone Descriptor Change Notices: Not Supported 00:25:00.821 Discovery Log Change Notices: Not Supported 00:25:00.821 Controller Attributes 00:25:00.821 128-bit Host Identifier: Supported 00:25:00.821 Non-Operational Permissive Mode: Not Supported 00:25:00.821 NVM Sets: Not 
Supported 00:25:00.821 Read Recovery Levels: Not Supported 00:25:00.821 Endurance Groups: Not Supported 00:25:00.821 Predictable Latency Mode: Not Supported 00:25:00.821 Traffic Based Keep ALive: Supported 00:25:00.821 Namespace Granularity: Not Supported 00:25:00.821 SQ Associations: Not Supported 00:25:00.821 UUID List: Not Supported 00:25:00.821 Multi-Domain Subsystem: Not Supported 00:25:00.821 Fixed Capacity Management: Not Supported 00:25:00.821 Variable Capacity Management: Not Supported 00:25:00.821 Delete Endurance Group: Not Supported 00:25:00.821 Delete NVM Set: Not Supported 00:25:00.821 Extended LBA Formats Supported: Not Supported 00:25:00.821 Flexible Data Placement Supported: Not Supported 00:25:00.821 00:25:00.821 Controller Memory Buffer Support 00:25:00.821 ================================ 00:25:00.821 Supported: No 00:25:00.821 00:25:00.821 Persistent Memory Region Support 00:25:00.821 ================================ 00:25:00.821 Supported: No 00:25:00.821 00:25:00.821 Admin Command Set Attributes 00:25:00.821 ============================ 00:25:00.821 Security Send/Receive: Not Supported 00:25:00.821 Format NVM: Not Supported 00:25:00.821 Firmware Activate/Download: Not Supported 00:25:00.821 Namespace Management: Not Supported 00:25:00.821 Device Self-Test: Not Supported 00:25:00.821 Directives: Not Supported 00:25:00.821 NVMe-MI: Not Supported 00:25:00.821 Virtualization Management: Not Supported 00:25:00.821 Doorbell Buffer Config: Not Supported 00:25:00.821 Get LBA Status Capability: Not Supported 00:25:00.821 Command & Feature Lockdown Capability: Not Supported 00:25:00.821 Abort Command Limit: 4 00:25:00.821 Async Event Request Limit: 4 00:25:00.821 Number of Firmware Slots: N/A 00:25:00.821 Firmware Slot 1 Read-Only: N/A 00:25:00.821 Firmware Activation Without Reset: N/A 00:25:00.821 Multiple Update Detection Support: N/A 00:25:00.821 Firmware Update Granularity: No Information Provided 00:25:00.821 Per-Namespace SMART Log: Yes 
00:25:00.821 Asymmetric Namespace Access Log Page: Supported 00:25:00.821 ANA Transition Time : 10 sec 00:25:00.821 00:25:00.821 Asymmetric Namespace Access Capabilities 00:25:00.821 ANA Optimized State : Supported 00:25:00.821 ANA Non-Optimized State : Supported 00:25:00.821 ANA Inaccessible State : Supported 00:25:00.821 ANA Persistent Loss State : Supported 00:25:00.821 ANA Change State : Supported 00:25:00.821 ANAGRPID is not changed : No 00:25:00.821 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:00.821 00:25:00.821 ANA Group Identifier Maximum : 128 00:25:00.821 Number of ANA Group Identifiers : 128 00:25:00.821 Max Number of Allowed Namespaces : 1024 00:25:00.821 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:00.821 Command Effects Log Page: Supported 00:25:00.821 Get Log Page Extended Data: Supported 00:25:00.821 Telemetry Log Pages: Not Supported 00:25:00.821 Persistent Event Log Pages: Not Supported 00:25:00.821 Supported Log Pages Log Page: May Support 00:25:00.821 Commands Supported & Effects Log Page: Not Supported 00:25:00.821 Feature Identifiers & Effects Log Page:May Support 00:25:00.821 NVMe-MI Commands & Effects Log Page: May Support 00:25:00.821 Data Area 4 for Telemetry Log: Not Supported 00:25:00.821 Error Log Page Entries Supported: 128 00:25:00.821 Keep Alive: Supported 00:25:00.821 Keep Alive Granularity: 1000 ms 00:25:00.821 00:25:00.821 NVM Command Set Attributes 00:25:00.821 ========================== 00:25:00.821 Submission Queue Entry Size 00:25:00.821 Max: 64 00:25:00.821 Min: 64 00:25:00.821 Completion Queue Entry Size 00:25:00.821 Max: 16 00:25:00.821 Min: 16 00:25:00.821 Number of Namespaces: 1024 00:25:00.821 Compare Command: Not Supported 00:25:00.821 Write Uncorrectable Command: Not Supported 00:25:00.821 Dataset Management Command: Supported 00:25:00.821 Write Zeroes Command: Supported 00:25:00.821 Set Features Save Field: Not Supported 00:25:00.821 Reservations: Not Supported 00:25:00.821 Timestamp: Not Supported 
00:25:00.821 Copy: Not Supported 00:25:00.821 Volatile Write Cache: Present 00:25:00.821 Atomic Write Unit (Normal): 1 00:25:00.821 Atomic Write Unit (PFail): 1 00:25:00.821 Atomic Compare & Write Unit: 1 00:25:00.821 Fused Compare & Write: Not Supported 00:25:00.821 Scatter-Gather List 00:25:00.821 SGL Command Set: Supported 00:25:00.821 SGL Keyed: Not Supported 00:25:00.821 SGL Bit Bucket Descriptor: Not Supported 00:25:00.821 SGL Metadata Pointer: Not Supported 00:25:00.821 Oversized SGL: Not Supported 00:25:00.821 SGL Metadata Address: Not Supported 00:25:00.822 SGL Offset: Supported 00:25:00.822 Transport SGL Data Block: Not Supported 00:25:00.822 Replay Protected Memory Block: Not Supported 00:25:00.822 00:25:00.822 Firmware Slot Information 00:25:00.822 ========================= 00:25:00.822 Active slot: 0 00:25:00.822 00:25:00.822 Asymmetric Namespace Access 00:25:00.822 =========================== 00:25:00.822 Change Count : 0 00:25:00.822 Number of ANA Group Descriptors : 1 00:25:00.822 ANA Group Descriptor : 0 00:25:00.822 ANA Group ID : 1 00:25:00.822 Number of NSID Values : 1 00:25:00.822 Change Count : 0 00:25:00.822 ANA State : 1 00:25:00.822 Namespace Identifier : 1 00:25:00.822 00:25:00.822 Commands Supported and Effects 00:25:00.822 ============================== 00:25:00.822 Admin Commands 00:25:00.822 -------------- 00:25:00.822 Get Log Page (02h): Supported 00:25:00.822 Identify (06h): Supported 00:25:00.822 Abort (08h): Supported 00:25:00.822 Set Features (09h): Supported 00:25:00.822 Get Features (0Ah): Supported 00:25:00.822 Asynchronous Event Request (0Ch): Supported 00:25:00.822 Keep Alive (18h): Supported 00:25:00.822 I/O Commands 00:25:00.822 ------------ 00:25:00.822 Flush (00h): Supported 00:25:00.822 Write (01h): Supported LBA-Change 00:25:00.822 Read (02h): Supported 00:25:00.822 Write Zeroes (08h): Supported LBA-Change 00:25:00.822 Dataset Management (09h): Supported 00:25:00.822 00:25:00.822 Error Log 00:25:00.822 ========= 
00:25:00.822 Entry: 0 00:25:00.822 Error Count: 0x3 00:25:00.822 Submission Queue Id: 0x0 00:25:00.822 Command Id: 0x5 00:25:00.822 Phase Bit: 0 00:25:00.822 Status Code: 0x2 00:25:00.822 Status Code Type: 0x0 00:25:00.822 Do Not Retry: 1 00:25:00.822 Error Location: 0x28 00:25:00.822 LBA: 0x0 00:25:00.822 Namespace: 0x0 00:25:00.822 Vendor Log Page: 0x0 00:25:00.822 ----------- 00:25:00.822 Entry: 1 00:25:00.822 Error Count: 0x2 00:25:00.822 Submission Queue Id: 0x0 00:25:00.822 Command Id: 0x5 00:25:00.822 Phase Bit: 0 00:25:00.822 Status Code: 0x2 00:25:00.822 Status Code Type: 0x0 00:25:00.822 Do Not Retry: 1 00:25:00.822 Error Location: 0x28 00:25:00.822 LBA: 0x0 00:25:00.822 Namespace: 0x0 00:25:00.822 Vendor Log Page: 0x0 00:25:00.822 ----------- 00:25:00.822 Entry: 2 00:25:00.822 Error Count: 0x1 00:25:00.822 Submission Queue Id: 0x0 00:25:00.822 Command Id: 0x4 00:25:00.822 Phase Bit: 0 00:25:00.822 Status Code: 0x2 00:25:00.822 Status Code Type: 0x0 00:25:00.822 Do Not Retry: 1 00:25:00.822 Error Location: 0x28 00:25:00.822 LBA: 0x0 00:25:00.822 Namespace: 0x0 00:25:00.822 Vendor Log Page: 0x0 00:25:00.822 00:25:00.822 Number of Queues 00:25:00.822 ================ 00:25:00.822 Number of I/O Submission Queues: 128 00:25:00.822 Number of I/O Completion Queues: 128 00:25:00.822 00:25:00.822 ZNS Specific Controller Data 00:25:00.822 ============================ 00:25:00.822 Zone Append Size Limit: 0 00:25:00.822 00:25:00.822 00:25:00.822 Active Namespaces 00:25:00.822 ================= 00:25:00.822 get_feature(0x05) failed 00:25:00.822 Namespace ID:1 00:25:00.822 Command Set Identifier: NVM (00h) 00:25:00.822 Deallocate: Supported 00:25:00.822 Deallocated/Unwritten Error: Not Supported 00:25:00.822 Deallocated Read Value: Unknown 00:25:00.822 Deallocate in Write Zeroes: Not Supported 00:25:00.822 Deallocated Guard Field: 0xFFFF 00:25:00.822 Flush: Supported 00:25:00.822 Reservation: Not Supported 00:25:00.822 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:00.822 Size (in LBAs): 1953525168 (931GiB) 00:25:00.822 Capacity (in LBAs): 1953525168 (931GiB) 00:25:00.822 Utilization (in LBAs): 1953525168 (931GiB) 00:25:00.822 UUID: 3186bc98-d051-4367-b3dc-a0b3c53a3374 00:25:00.822 Thin Provisioning: Not Supported 00:25:00.822 Per-NS Atomic Units: Yes 00:25:00.822 Atomic Boundary Size (Normal): 0 00:25:00.822 Atomic Boundary Size (PFail): 0 00:25:00.822 Atomic Boundary Offset: 0 00:25:00.822 NGUID/EUI64 Never Reused: No 00:25:00.822 ANA group ID: 1 00:25:00.822 Namespace Write Protected: No 00:25:00.822 Number of LBA Formats: 1 00:25:00.822 Current LBA Format: LBA Format #00 00:25:00.822 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:00.822 00:25:00.822 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:00.822 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.822 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:00.822 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.822 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:00.822 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.822 09:57:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.822 rmmod nvme_tcp 00:25:00.822 rmmod nvme_fabrics 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.822 09:57:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:03.360 09:57:26 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:03.360 09:57:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:05.896 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:05.896 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:25:06.834 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:06.834 00:25:06.834 real 0m16.750s 00:25:06.834 user 0m4.296s 00:25:06.834 sys 0m8.828s 00:25:06.834 09:57:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.834 09:57:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.834 ************************************ 00:25:06.834 END TEST nvmf_identify_kernel_target 00:25:06.834 ************************************ 00:25:06.834 09:57:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:06.834 09:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:06.834 09:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.834 09:57:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.834 ************************************ 00:25:06.834 START TEST nvmf_auth_host 00:25:06.834 ************************************ 00:25:06.834 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:07.094 * Looking for test storage... 
00:25:07.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # lcov --version 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:25:07.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.094 --rc genhtml_branch_coverage=1 00:25:07.094 --rc genhtml_function_coverage=1 00:25:07.094 --rc genhtml_legend=1 00:25:07.094 --rc geninfo_all_blocks=1 00:25:07.094 --rc geninfo_unexecuted_blocks=1 00:25:07.094 00:25:07.094 ' 00:25:07.094 09:57:30 
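The trace above walks `cmp_versions` through a field-by-field numeric comparison of `1.15` against `2` (versions split on `.-:`, missing fields treated as 0). A compact stand-in that gives the same verdict, using `sort -V` instead of the script's manual loop (a GNU coreutils assumption, not what `scripts/common.sh` actually does):

```shell
# lt VER1 VER2 -> exit 0 when VER1 is strictly older than VER2.
# sort -V orders version strings field-by-field numerically, matching
# the cmp_versions outcome for inputs like "1.15" vs "2".
lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"   # the check from the log
```

Numeric (not lexical) field comparison matters here: `1.9` must sort before `1.15`, which a plain string sort would get wrong.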
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:25:07.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.094 --rc genhtml_branch_coverage=1 00:25:07.094 --rc genhtml_function_coverage=1 00:25:07.094 --rc genhtml_legend=1 00:25:07.094 --rc geninfo_all_blocks=1 00:25:07.094 --rc geninfo_unexecuted_blocks=1 00:25:07.094 00:25:07.094 ' 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:25:07.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.094 --rc genhtml_branch_coverage=1 00:25:07.094 --rc genhtml_function_coverage=1 00:25:07.094 --rc genhtml_legend=1 00:25:07.094 --rc geninfo_all_blocks=1 00:25:07.094 --rc geninfo_unexecuted_blocks=1 00:25:07.094 00:25:07.094 ' 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:25:07.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.094 --rc genhtml_branch_coverage=1 00:25:07.094 --rc genhtml_function_coverage=1 00:25:07.094 --rc genhtml_legend=1 00:25:07.094 --rc geninfo_all_blocks=1 00:25:07.094 --rc geninfo_unexecuted_blocks=1 00:25:07.094 00:25:07.094 ' 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.094 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.094 09:57:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:07.095 09:57:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.095 09:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:13.667 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:13.667 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:13.667 Found net devices under 0000:86:00.0: cvl_0_0 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.667 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:13.668 Found net devices under 0000:86:00.1: cvl_0_1 00:25:13.668 09:57:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.668 09:57:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.668 09:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:25:13.668 00:25:13.668 --- 10.0.0.2 ping statistics --- 00:25:13.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.668 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:25:13.668 00:25:13.668 --- 10.0.0.1 ping statistics --- 00:25:13.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.668 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3043985 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:13.668 09:57:36 
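Condensed from the `nvmftestinit` trace above: the target port `cvl_0_0` is moved into the `cvl_0_0_ns_spdk` namespace and addressed as 10.0.0.2/24, the initiator side `cvl_0_1` stays in the root namespace as 10.0.0.1/24, and TCP port 4420 is opened before the cross-namespace pings verify reachability. The sequence, listed rather than executed, since applying it needs root and the physical E810 ports:

```shell
# Network setup steps as traced in the log; echoed only, because running
# them requires root privileges and the real cvl_0_* net devices.
cmds=(
  "ip netns add cvl_0_0_ns_spdk"
  "ip link set cvl_0_0 netns cvl_0_0_ns_spdk"
  "ip addr add 10.0.0.1/24 dev cvl_0_1"
  "ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0"
  "ip link set cvl_0_1 up"
  "ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up"
  "ip netns exec cvl_0_0_ns_spdk ip link set lo up"
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${cmds[@]}"
```

Putting the target NIC in its own namespace is what lets one machine act as both NVMe-oF target (inside the namespace, via `NVMF_TARGET_NS_CMD`) and initiator (in the root namespace) over a real wire.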
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3043985 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3043985 ']' 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7165807fff1d7f123419df097f7f15ac 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.TCc 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7165807fff1d7f123419df097f7f15ac 0 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7165807fff1d7f123419df097f7f15ac 0 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7165807fff1d7f123419df097f7f15ac 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.TCc 00:25:13.668 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.TCc 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.TCc 00:25:13.669 09:57:36 
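As traced, `gen_dhchap_key` pulls `len/2` random bytes as hex via `xxd` and pipes them through an inline `python -` snippet to produce the configured secret. A standalone sketch of that flow, assuming the `DHHC-1` representation is base64(key bytes || little-endian CRC32) prefixed with the digest id (`00` = null, as in this trace); `od` is used as a fallback where `xxd` is absent:

```shell
# gen_dhchap_key DIGEST_ID HEX_LEN -> prints "DHHC-1:<id>:<base64>:"
gen_dhchap_key() {
  local hex
  # HEX_LEN hex chars, i.e. HEX_LEN/2 random bytes (the log uses xxd -p -c0)
  hex=$( (xxd -p -l $(( $2 / 2 )) /dev/urandom 2>/dev/null \
          || od -An -N$(( $2 / 2 )) -tx1 /dev/urandom) | tr -d ' \n')
  python3 - "$1" "$hex" <<'EOF'
import base64, binascii, sys
digest, hexkey = int(sys.argv[1]), sys.argv[2]
raw = bytes.fromhex(hexkey)
# CRC32 of the key, appended little-endian, as nvme-cli's key format does
crc = binascii.crc32(raw).to_bytes(4, "little")
print(f"DHHC-1:{digest:02}:{base64.b64encode(raw + crc).decode()}:")
EOF
}

key=$(gen_dhchap_key 0 32)   # 32 hex chars = 16-byte null-digest key, as in the log
echo "$key"
```

The trailing digest id selects which hash (if any) the 0x33-byte secret was transformed with: `0` null, `1` sha256, `2` sha384, `3` sha512, matching the `digest=0`/`digest=3` values traced above.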
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d7a3062a52b060855438d596a60fce6d84f32084b8b328f02815171b2dc96954 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.x4a 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d7a3062a52b060855438d596a60fce6d84f32084b8b328f02815171b2dc96954 3 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d7a3062a52b060855438d596a60fce6d84f32084b8b328f02815171b2dc96954 3 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d7a3062a52b060855438d596a60fce6d84f32084b8b328f02815171b2dc96954 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.x4a 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.x4a 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.x4a 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=86088f414565bc18cdc8492a707cd3cebf674c5cdbb84d81 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hG7 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 86088f414565bc18cdc8492a707cd3cebf674c5cdbb84d81 0 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 86088f414565bc18cdc8492a707cd3cebf674c5cdbb84d81 0 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.669 09:57:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=86088f414565bc18cdc8492a707cd3cebf674c5cdbb84d81 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hG7 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hG7 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.hG7 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1fbfff6cd29a655b90482f42fe4ab7b5f603d1fbfa71a133 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.KgR 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1fbfff6cd29a655b90482f42fe4ab7b5f603d1fbfa71a133 2 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 1fbfff6cd29a655b90482f42fe4ab7b5f603d1fbfa71a133 2 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1fbfff6cd29a655b90482f42fe4ab7b5f603d1fbfa71a133 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.KgR 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.KgR 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.KgR 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b9b599bda8726310b93659b45a01dd91 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.48v 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b9b599bda8726310b93659b45a01dd91 1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b9b599bda8726310b93659b45a01dd91 1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b9b599bda8726310b93659b45a01dd91 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.48v 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.48v 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.48v 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=19ba5d98f3eb973781210210b63ecf90 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.d7H 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 19ba5d98f3eb973781210210b63ecf90 1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 19ba5d98f3eb973781210210b63ecf90 1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=19ba5d98f3eb973781210210b63ecf90 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.d7H 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.d7H 00:25:13.669 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.d7H 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:13.670 09:57:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e2896499d84fb738510fbc8b90fd06430fe429bb9a616f19 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eVN 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e2896499d84fb738510fbc8b90fd06430fe429bb9a616f19 2 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e2896499d84fb738510fbc8b90fd06430fe429bb9a616f19 2 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e2896499d84fb738510fbc8b90fd06430fe429bb9a616f19 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eVN 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eVN 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.eVN 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6e5e4210548e072394a9c8ff6cf2564c 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iAE 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6e5e4210548e072394a9c8ff6cf2564c 0 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6e5e4210548e072394a9c8ff6cf2564c 0 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.670 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.930 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6e5e4210548e072394a9c8ff6cf2564c 00:25:13.930 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:13.930 09:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iAE 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iAE 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.iAE 00:25:13.930 09:57:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6af2524b94cf9531ce1211ba2fd5a74a8a4bb26d9d6570eff56b99cd87a7e8a4 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Njp 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6af2524b94cf9531ce1211ba2fd5a74a8a4bb26d9d6570eff56b99cd87a7e8a4 3 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6af2524b94cf9531ce1211ba2fd5a74a8a4bb26d9d6570eff56b99cd87a7e8a4 3 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6af2524b94cf9531ce1211ba2fd5a74a8a4bb26d9d6570eff56b99cd87a7e8a4 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
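Each raw hex key is wrapped by `format_dhchap_key` / `format_key` into the `DHHC-1:<digest>:<base64>:` secret representation; the encoding itself happens in an inline snippet that appears in the trace only as `python -`. A sketch of what that encoding looks like, assuming the standard DH-HMAC-CHAP secret layout in which the ASCII hex string is the payload and a CRC32 of it is appended before base64 encoding (the CRC byte order here is an assumption):

```python
import base64
import zlib

def format_dhchap_key(key_hex: str, digest_id: int) -> str:
    """Produce a DHHC-1 secret string from a hex key and a digest id
    (0=null, 1=sha256, 2=sha384, 3=sha512). The ASCII hex string is the
    payload; its CRC32 is appended (little-endian assumed) before
    base64 encoding, matching the DHHC-1:<id>:<base64>: shape in the log."""
    payload = key_hex.encode("ascii")
    payload += zlib.crc32(payload).to_bytes(4, "little")
    return f"DHHC-1:{digest_id:02x}:{base64.b64encode(payload).decode()}:"

secret = format_dhchap_key(
    "d7a3062a52b060855438d596a60fce6d84f32084b8b328f02815171b2dc96954", 3)
```

Because the payload starts with the ASCII key itself, the hex keys generated earlier are directly recoverable from the base64 portion of the DHHC-1 secrets that appear later in this trace.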
00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Njp 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Njp 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Njp 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3043985 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3043985 ']' 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
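After the keys are registered over the RPC socket, `configure_kernel_target` later in this trace builds an in-kernel nvmet target purely through configfs: `mkdir` the subsystem, namespace, and port directories, `echo` the attributes, then `ln -s` the subsystem under the port. A dry-run sketch of that sequence under the kernel's standard nvmet configfs layout; the log's bare `echo` lines do not show their redirection targets, so the mapping of each value to an attribute file is inferred and should be treated as an assumption:

```python
from pathlib import PurePosixPath

NVMET = PurePosixPath("/sys/kernel/config/nvmet")

def kernel_target_ops(subnqn, bdev, ip, trsvcid="4420"):
    """Return the (action, path, value) steps the log performs with
    mkdir/echo/ln -s. This is a dry run: nothing touches configfs."""
    subsys = NVMET / "subsystems" / subnqn
    ns = subsys / "namespaces" / "1"
    port = NVMET / "ports" / "1"
    return [
        ("mkdir", subsys, None),
        ("mkdir", ns, None),
        ("mkdir", port, None),
        ("write", subsys / "attr_model", f"SPDK-{subnqn}"),
        ("write", subsys / "attr_allow_any_host", "1"),
        ("write", ns / "device_path", bdev),
        ("write", ns / "enable", "1"),
        ("write", port / "addr_traddr", ip),
        ("write", port / "addr_trtype", "tcp"),
        ("write", port / "addr_trsvcid", trsvcid),
        ("write", port / "addr_adrfam", "ipv4"),
        # Exposing the subsystem on the port is done with a symlink.
        ("symlink", subsys, port / "subsystems" / subnqn),
    ]

ops = kernel_target_ops("nqn.2024-02.io.spdk:cnode0", "/dev/nvme0n1", "10.0.0.1")
```

Once the symlink is in place, the target answers on 10.0.0.1:4420, which is what the subsequent `nvme discover` output in this trace confirms.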
00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.930 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TCc 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.x4a ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.x4a 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.hG7 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.KgR ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KgR 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.48v 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.d7H ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.d7H 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.eVN 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.iAE ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.iAE 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Njp 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.190 09:57:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:14.190 09:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:16.725 Waiting for block devices as requested 00:25:16.984 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:16.984 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:16.984 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:17.243 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:17.243 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:17.243 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:17.243 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:17.502 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:17.503 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:17.503 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:17.762 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:17.762 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:17.762 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:17.762 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:18.021 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:18.021 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:18.021 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:18.590 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:18.590 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:18.590 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:18.590 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:18.590 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:18.590 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:18.590 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:18.590 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:18.590 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:18.850 No valid GPT data, bailing 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:18.850 09:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:18.850
00:25:18.850 Discovery Log Number of Records 2, Generation counter 2
00:25:18.850 =====Discovery Log Entry 0======
00:25:18.850 trtype: tcp
00:25:18.850 adrfam: ipv4
00:25:18.850 subtype: current discovery subsystem
00:25:18.850 treq: not specified, sq flow control disable supported
00:25:18.850 portid: 1
00:25:18.850 trsvcid: 4420
00:25:18.850 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:18.850 traddr: 10.0.0.1
00:25:18.850 eflags: none
00:25:18.850 sectype: none
00:25:18.850 =====Discovery Log Entry 1======
00:25:18.850 trtype: tcp
00:25:18.850 adrfam: ipv4
00:25:18.850 subtype: nvme subsystem
00:25:18.850 treq: not specified, sq flow control disable supported
00:25:18.850 portid: 1
00:25:18.850 trsvcid: 4420
00:25:18.850 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:18.850 traddr: 10.0.0.1
00:25:18.850 eflags: none
00:25:18.850 sectype: none
00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.850 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.109 nvme0n1 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:19.109 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
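The `nvmet_auth_set_key` calls traced above (host/auth.sh@42-51) boil down to writing the negotiated hash, DH group, and DH-HMAC-CHAP secrets into the target's configfs entry for the host NQN. A minimal dry-run sketch of that step follows; the configfs attribute names (`dhchap_hash`, `dhchap_dhgroup`, `dhchap_key`, `dhchap_ctrl_key`) are assumptions based on the kernel nvmet configfs layout, and the commands are echoed rather than executed so the sketch runs without a configured target:

```shell
#!/usr/binin/env bash
# Hedged sketch of the nvmet_auth_set_key step seen in the trace.
# Configfs attribute names are assumptions; commands are echoed
# (dry run) so the sketch runs on any machine.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 key=$3 ckey=$4
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
	echo "echo 'hmac(${digest})' > ${host}/dhchap_hash"
	echo "echo ${dhgroup} > ${host}/dhchap_dhgroup"
	echo "echo ${key} > ${host}/dhchap_key"
	# The controller key is optional (keyid 4 in the log has none).
	if [[ -n ${ckey} ]]; then
		echo "echo ${ckey} > ${host}/dhchap_ctrl_key"
	fi
}

# Mirrors "nvmet_auth_set_key sha256 ffdhe2048 1" from the log,
# with placeholder secrets instead of the real DHHC-1 keys.
nvmet_auth_set_key sha256 ffdhe2048 "DHHC-1:00:placeholder:" "DHHC-1:02:placeholder:"
```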
00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.110 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 nvme0n1 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 09:57:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.370 
09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 nvme0n1 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.370 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:19.630 nvme0n1 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.630 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.890 09:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.890 nvme0n1 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.891 09:57:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.891 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.151 nvme0n1 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.151 
09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:20.151 
09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.151 09:57:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.151 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.411 nvme0n1 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.411 09:57:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.411 09:57:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.411 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.671 nvme0n1 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.671 09:57:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.671 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.672 09:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.932 nvme0n1 00:25:20.932 09:57:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:20.932 09:57:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.932 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.192 nvme0n1 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.192 09:57:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.192 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.452 nvme0n1 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.452 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.453 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.712 nvme0n1 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.712 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.713 
09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.713 09:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.972 nvme0n1 00:25:21.972 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.972 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.972 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.972 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.972 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.972 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.973 09:57:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:21.973 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.233 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.493 nvme0n1 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.493 09:57:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:22.493 
09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.493 09:57:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.493 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.753 nvme0n1 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.753 09:57:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.753 
09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.753 09:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.012 nvme0n1 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.013 09:57:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.013 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.582 nvme0n1 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.582 09:57:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.582 09:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.841 nvme0n1 00:25:23.841 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.841 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.842 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.842 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.842 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.842 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.842 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.842 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.842 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.842 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.102 09:57:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.102 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.362 nvme0n1 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.362 09:57:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.362 09:57:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.362 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.363 09:57:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.363 09:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 nvme0n1 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.931 09:57:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.931 09:57:48 
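The `get_main_ns_ip` steps traced at nvmf/common.sh@769-@783 pick which environment variable holds the address to dial, keyed by transport, and then dereference it. A hedged reconstruction (bash associative arrays plus indirect expansion; the variable values below are illustrative, matching the 10.0.0.1 seen in this run):

```shell
# Sketch of get_main_ns_ip: map transport -> variable *name*, then use
# bash indirect expansion (${!ip}) to print that variable's value.
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -n $ip ]] || return 1        # unknown transport: no candidate variable
	[[ -n ${!ip} ]] || return 1     # candidate variable itself is unset/empty
	echo "${!ip}"                   # e.g. the value of NVMF_INITIATOR_IP
}

TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip
```

This explains why the trace shows `ip=NVMF_INITIATOR_IP` (a variable name, not an address) before finally echoing `10.0.0.1`.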
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.931 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.190 nvme0n1 00:25:25.190 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.190 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.190 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.190 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.190 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.190 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.450 09:57:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.450 09:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.021 nvme0n1 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.021 09:57:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.021 09:57:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.021 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.021 09:57:49 
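Each iteration of the trace repeats the same `connect_authenticate` round (auth.sh@55-@65): restrict the initiator to one digest/dhgroup, attach with the keys for this keyid, confirm the controller came up, and detach for the next round. The sketch below mirrors that flow with `rpc_cmd` stubbed to echo its arguments, so the control flow runs without a live SPDK target; in the real script `rpc_cmd` wraps SPDK's JSON-RPC client.

```shell
# Stub: in the autotest this invokes SPDK's rpc client against the running target.
rpc_cmd() { echo "rpc: $*"; }

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Force the negotiation to exactly one digest and one DH group.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	# Attach using this keyid's host key and (bidirectional) controller key.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
	# Verify the controller exists (the trace checks the name is nvme0), then
	# tear it down so the next digest/dhgroup/keyid combination starts clean.
	rpc_cmd bdev_nvme_get_controllers
	rpc_cmd bdev_nvme_detach_controller nvme0
}

connect_authenticate sha256 ffdhe8192 1
```

The bare `nvme0n1` lines interleaved in the log are the kernel's namespace device appearing as each authenticated attach succeeds.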
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.595 nvme0n1 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.595 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.596 09:57:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.596 09:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.163 nvme0n1 00:25:27.163 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.423 09:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.991 nvme0n1 00:25:27.991 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.991 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.991 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.991 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.991 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.992 
09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.992 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.651 nvme0n1 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.651 09:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.940 nvme0n1 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.940 
09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.940 nvme0n1 
00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.940 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:29.200 09:57:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.200 
09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.200 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.201 nvme0n1 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.201 09:57:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.201 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.460 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.461 nvme0n1 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.461 09:57:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.461 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.721 nvme0n1 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.721 09:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.981 nvme0n1 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.981 
09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:29.981 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:29.982 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.982 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.982 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.241 nvme0n1 00:25:30.241 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:30.241 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.241 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.241 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.241 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.241 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.241 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 
00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.242 09:57:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.242 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.502 nvme0n1 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.502 09:57:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.502 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.761 nvme0n1 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:30.761 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.762 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.762 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:30.762 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.762 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.762 09:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.762 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.021 nvme0n1 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.021 09:57:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.021 09:57:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.021 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.022 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.022 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.022 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.022 09:57:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.281 nvme0n1 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.281 
09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.281 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.540 nvme0n1 00:25:31.540 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.540 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.540 09:57:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.540 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.540 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.540 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.540 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.540 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.540 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.540 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.800 09:57:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.800 09:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.061 nvme0n1 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.061 09:57:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.061 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.321 nvme0n1 00:25:32.321 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.321 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.321 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.321 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.321 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.321 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.321 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.322 09:57:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.322 09:57:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.322 
09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.322 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.582 nvme0n1 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.582 09:57:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.582 09:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.151 nvme0n1 
00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:33.151 09:57:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.151 
09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.151 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.411 nvme0n1 00:25:33.411 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.411 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.411 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.411 09:57:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.411 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.411 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.411 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.671 09:57:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.671 09:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.931 nvme0n1 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.931 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:33.932 09:57:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.932 09:57:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.932 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.501 nvme0n1 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.501 09:57:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:34.501 09:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.760 nvme0n1 00:25:34.760 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.760 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.760 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.760 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.760 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.760 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.020 09:57:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.020 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.587 nvme0n1 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.587 09:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.155 nvme0n1 00:25:36.155 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.155 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.155 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.156 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.415 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.415 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.415 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.415 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.415 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.415 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.415 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.415 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.415 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.416 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.416 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.416 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.416 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.416 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.416 09:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.984 nvme0n1 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.984 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.554 nvme0n1 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.554 09:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.123 nvme0n1 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.123 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.383 09:58:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.383 nvme0n1 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.383 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.643 nvme0n1 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.643 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.644 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.644 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.644 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.644 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.644 09:58:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.903 nvme0n1 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP
00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:38.903 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.162 nvme0n1
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=:
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=:
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.163 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.423 nvme0n1
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g:
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=:
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g:
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]]
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=:
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.423 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.684 nvme0n1
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==:
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==:
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==:
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]]
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==:
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.684 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:39.685 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.685 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:39.685 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:39.685 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:39.685 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:39.685 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.685 09:58:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.944 nvme0n1
00:25:39.944 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.944 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:39.944 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:39.944 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.944 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg:
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY:
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg:
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]]
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY:
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.945 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.205 nvme0n1
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==:
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI:
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==:
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]]
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI:
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.205 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.465 nvme0n1
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=:
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=:
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.465 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.725 nvme0n1
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g:
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=:
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g:
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]]
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=:
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:40.725 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.726 09:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.985 nvme0n1
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:40.985 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==:
00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==:
00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.986 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.246 nvme0n1 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.246 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.506 nvme0n1 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.506 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.766 09:58:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.766 09:58:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.026 nvme0n1 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.026 09:58:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.026 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.286 nvme0n1 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.286 
09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.286 09:58:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.286 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.854 nvme0n1 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.854 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:42.855 09:58:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.855 09:58:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.114 nvme0n1 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:43.114 
09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.114 09:58:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.114 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.684 nvme0n1 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.684 09:58:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.684 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.685 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.685 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.685 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.685 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.685 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.685 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.685 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.685 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.685 09:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.944 nvme0n1 00:25:43.944 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.944 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.944 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.944 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.944 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.944 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.203 09:58:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.203 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.464 nvme0n1 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.464 
09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzE2NTgwN2ZmZjFkN2YxMjM0MTlkZjA5N2Y3ZjE1YWNAb27g: 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: ]] 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDdhMzA2MmE1MmIwNjA4NTU0MzhkNTk2YTYwZmNlNmQ4NGYzMjA4NGI4YjMyOGYwMjgxNTE3MWIyZGM5Njk1NKruiYE=: 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.464 09:58:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.464 09:58:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.409 nvme0n1 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.409 09:58:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:45.409 09:58:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:45.409 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.410 09:58:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.410 09:58:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.978 nvme0n1 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.978 09:58:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:45.978 09:58:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.978 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.979 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.979 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.979 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.979 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.546 nvme0n1 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.546 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTI4OTY0OTlkODRmYjczODUxMGZiYzhiOTBmZDA2NDMwZmU0MjliYjlhNjE2ZjE5NF8/LQ==: 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: ]] 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmU1ZTQyMTA1NDhlMDcyMzk0YTljOGZmNmNmMjU2NGNAgAKI: 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.547 09:58:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.547 09:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.115 nvme0n1 00:25:47.115 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.115 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.115 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.115 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.115 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.115 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.115 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFmMjUyNGI5NGNmOTUzMWNlMTIxMWJhMmZkNWE3NGE4YTRiYjI2ZDlkNjU3MGVmZjU2Yjk5Y2Q4N2E3ZThhNFNmp1M=: 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.116 
09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.116 09:58:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.053 nvme0n1 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.053 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 request: 00:25:48.054 { 00:25:48.054 "name": "nvme0", 00:25:48.054 "trtype": "tcp", 00:25:48.054 "traddr": "10.0.0.1", 00:25:48.054 "adrfam": "ipv4", 00:25:48.054 "trsvcid": "4420", 00:25:48.054 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:48.054 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:48.054 "prchk_reftag": false, 00:25:48.054 "prchk_guard": false, 00:25:48.054 "hdgst": false, 00:25:48.054 "ddgst": false, 00:25:48.054 "allow_unrecognized_csi": false, 00:25:48.054 "method": "bdev_nvme_attach_controller", 00:25:48.054 "req_id": 1 00:25:48.054 } 00:25:48.054 Got JSON-RPC error 
response 00:25:48.054 response: 00:25:48.054 { 00:25:48.054 "code": -5, 00:25:48.054 "message": "Input/output error" 00:25:48.054 } 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 request: 
00:25:48.054 { 00:25:48.054 "name": "nvme0", 00:25:48.054 "trtype": "tcp", 00:25:48.054 "traddr": "10.0.0.1", 00:25:48.054 "adrfam": "ipv4", 00:25:48.054 "trsvcid": "4420", 00:25:48.054 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:48.054 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:48.054 "prchk_reftag": false, 00:25:48.054 "prchk_guard": false, 00:25:48.054 "hdgst": false, 00:25:48.054 "ddgst": false, 00:25:48.054 "dhchap_key": "key2", 00:25:48.054 "allow_unrecognized_csi": false, 00:25:48.054 "method": "bdev_nvme_attach_controller", 00:25:48.054 "req_id": 1 00:25:48.054 } 00:25:48.054 Got JSON-RPC error response 00:25:48.054 response: 00:25:48.054 { 00:25:48.054 "code": -5, 00:25:48.054 "message": "Input/output error" 00:25:48.054 } 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:48.054 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:48.055 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:48.055 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:48.055 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.055 09:58:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:48.055 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.055 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:48.055 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.055 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.314 request: 00:25:48.314 { 00:25:48.314 "name": "nvme0", 00:25:48.314 "trtype": "tcp", 00:25:48.314 "traddr": "10.0.0.1", 00:25:48.314 "adrfam": "ipv4", 00:25:48.314 "trsvcid": "4420", 00:25:48.314 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:48.314 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:48.314 "prchk_reftag": false, 00:25:48.314 "prchk_guard": false, 00:25:48.314 "hdgst": false, 00:25:48.314 "ddgst": false, 00:25:48.314 "dhchap_key": "key1", 00:25:48.314 "dhchap_ctrlr_key": "ckey2", 00:25:48.314 "allow_unrecognized_csi": false, 00:25:48.314 "method": "bdev_nvme_attach_controller", 00:25:48.314 "req_id": 1 00:25:48.314 } 00:25:48.314 Got JSON-RPC error response 00:25:48.314 response: 00:25:48.314 { 00:25:48.314 "code": -5, 00:25:48.314 "message": "Input/output error" 00:25:48.314 } 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.314 nvme0n1 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:48.314 09:58:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.314 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.315 
09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.315 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.574 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.574 request: 00:25:48.574 { 00:25:48.574 "name": "nvme0", 00:25:48.574 "dhchap_key": "key1", 00:25:48.574 "dhchap_ctrlr_key": "ckey2", 00:25:48.574 "method": "bdev_nvme_set_keys", 00:25:48.575 "req_id": 1 00:25:48.575 } 00:25:48.575 Got JSON-RPC error response 00:25:48.575 response: 
00:25:48.575 { 00:25:48.575 "code": -13, 00:25:48.575 "message": "Permission denied" 00:25:48.575 } 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:48.575 09:58:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:49.513 09:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.513 09:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:49.513 09:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.513 09:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.513 09:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.513 09:58:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:49.513 09:58:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYwODhmNDE0NTY1YmMxOGNkYzg0OTJhNzA3Y2QzY2ViZjY3NGM1Y2RiYjg0ZDgxAlzMaw==: 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: ]] 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWZiZmZmNmNkMjlhNjU1YjkwNDgyZjQyZmU0YWI3YjVmNjAzZDFmYmZhNzFhMTMzkixIWg==: 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.892 09:58:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.892 nvme0n1 00:25:50.892 09:58:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjliNTk5YmRhODcyNjMxMGI5MzY1OWI0NWEwMWRkOTEjJcdg: 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: ]] 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTliYTVkOThmM2ViOTczNzgxMjEwMjEwYjYzZWNmOTDI/eAY: 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:50.892 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:50.892 09:58:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.893 request: 00:25:50.893 { 00:25:50.893 "name": "nvme0", 00:25:50.893 "dhchap_key": "key2", 00:25:50.893 "dhchap_ctrlr_key": "ckey1", 00:25:50.893 "method": "bdev_nvme_set_keys", 00:25:50.893 "req_id": 1 00:25:50.893 } 00:25:50.893 Got JSON-RPC error response 00:25:50.893 response: 00:25:50.893 { 00:25:50.893 "code": -13, 00:25:50.893 "message": "Permission denied" 00:25:50.893 } 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:50.893 09:58:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:50.893 09:58:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:51.830 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.830 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:51.830 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.830 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.830 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.089 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:52.089 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:52.089 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:52.089 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:52.090 rmmod nvme_tcp 
00:25:52.090 rmmod nvme_fabrics 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3043985 ']' 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3043985 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3043985 ']' 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3043985 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043985 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043985' 00:25:52.090 killing process with pid 3043985 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3043985 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3043985 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.090 09:58:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.669 09:58:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:54.669 09:58:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:57.205 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:57.205 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:58.143 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:58.143 09:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.TCc /tmp/spdk.key-null.hG7 /tmp/spdk.key-sha256.48v /tmp/spdk.key-sha384.eVN 
/tmp/spdk.key-sha512.Njp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:58.143 09:58:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:01.437 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:01.437 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:01.437 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:01.437 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:01.437 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:01.437 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:01.437 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:01.437 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:01.437 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:01.437 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:01.437 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:01.438 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:01.438 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:01.438 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:01.438 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:01.438 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:01.438 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:01.438 00:26:01.438 real 0m54.161s 00:26:01.438 user 0m48.876s 00:26:01.438 sys 0m12.663s 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.438 ************************************ 00:26:01.438 END TEST nvmf_auth_host 00:26:01.438 ************************************ 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.438 ************************************ 00:26:01.438 START TEST nvmf_digest 00:26:01.438 ************************************ 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:01.438 * Looking for test storage... 00:26:01.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1703 -- # lcov --version 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.438 09:58:24 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:26:01.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.438 --rc genhtml_branch_coverage=1 00:26:01.438 --rc genhtml_function_coverage=1 00:26:01.438 --rc genhtml_legend=1 00:26:01.438 --rc geninfo_all_blocks=1 00:26:01.438 --rc geninfo_unexecuted_blocks=1 00:26:01.438 00:26:01.438 ' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:26:01.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.438 --rc genhtml_branch_coverage=1 00:26:01.438 --rc genhtml_function_coverage=1 00:26:01.438 --rc genhtml_legend=1 00:26:01.438 --rc geninfo_all_blocks=1 00:26:01.438 --rc geninfo_unexecuted_blocks=1 00:26:01.438 00:26:01.438 ' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:26:01.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.438 --rc genhtml_branch_coverage=1 00:26:01.438 --rc genhtml_function_coverage=1 00:26:01.438 --rc genhtml_legend=1 00:26:01.438 --rc geninfo_all_blocks=1 00:26:01.438 --rc geninfo_unexecuted_blocks=1 00:26:01.438 00:26:01.438 ' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:26:01.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.438 --rc genhtml_branch_coverage=1 00:26:01.438 --rc genhtml_function_coverage=1 00:26:01.438 --rc genhtml_legend=1 00:26:01.438 --rc geninfo_all_blocks=1 00:26:01.438 --rc geninfo_unexecuted_blocks=1 00:26:01.438 00:26:01.438 ' 00:26:01.438 09:58:24 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:01.438 
09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:01.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:01.438 09:58:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.438 09:58:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.015 09:58:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:08.015 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:08.015 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.015 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:08.016 Found net devices under 0000:86:00.0: cvl_0_0 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:08.016 Found net devices under 0000:86:00.1: cvl_0_1 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:08.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:26:08.016 00:26:08.016 --- 10.0.0.2 ping statistics --- 00:26:08.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.016 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:26:08.016 00:26:08.016 --- 10.0.0.1 ping statistics --- 00:26:08.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.016 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:08.016 ************************************ 00:26:08.016 START TEST nvmf_digest_clean 00:26:08.016 ************************************ 00:26:08.016 
09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3057744 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3057744 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3057744 ']' 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.016 09:58:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.016 [2024-11-20 09:58:30.571984] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:26:08.016 [2024-11-20 09:58:30.572030] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.016 [2024-11-20 09:58:30.632543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.016 [2024-11-20 09:58:30.671221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.016 [2024-11-20 09:58:30.671255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.016 [2024-11-20 09:58:30.671263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.016 [2024-11-20 09:58:30.671270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.016 [2024-11-20 09:58:30.671274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:08.016 [2024-11-20 09:58:30.671852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.016 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.016 null0 00:26:08.016 [2024-11-20 09:58:30.850699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.016 [2024-11-20 09:58:30.874924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3057765 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3057765 /var/tmp/bperf.sock 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3057765 ']' 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:08.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.017 09:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:08.017 [2024-11-20 09:58:30.928684] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:26:08.017 [2024-11-20 09:58:30.928727] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057765 ] 00:26:08.017 [2024-11-20 09:58:31.005041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.017 [2024-11-20 09:58:31.047533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.017 09:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.017 09:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:08.017 09:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:08.017 09:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:08.017 09:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:08.277 09:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.277 09:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:08.536 nvme0n1 00:26:08.536 09:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:08.536 09:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:08.536 Running I/O for 2 seconds... 00:26:10.853 25905.00 IOPS, 101.19 MiB/s [2024-11-20T08:58:34.185Z] 25007.50 IOPS, 97.69 MiB/s 00:26:10.853 Latency(us) 00:26:10.853 [2024-11-20T08:58:34.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.853 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:10.853 nvme0n1 : 2.00 25023.73 97.75 0.00 0.00 5111.00 2578.70 11682.50 00:26:10.853 [2024-11-20T08:58:34.185Z] =================================================================================================================== 00:26:10.853 [2024-11-20T08:58:34.185Z] Total : 25023.73 97.75 0.00 0.00 5111.00 2578.70 11682.50 00:26:10.853 { 00:26:10.853 "results": [ 00:26:10.853 { 00:26:10.853 "job": "nvme0n1", 00:26:10.853 "core_mask": "0x2", 00:26:10.853 "workload": "randread", 00:26:10.853 "status": "finished", 00:26:10.853 "queue_depth": 128, 00:26:10.853 "io_size": 4096, 00:26:10.853 "runtime": 2.003818, 00:26:10.853 "iops": 25023.729700002696, 00:26:10.853 "mibps": 97.74894414063553, 00:26:10.853 "io_failed": 0, 00:26:10.853 "io_timeout": 0, 00:26:10.853 "avg_latency_us": 5110.996705110341, 00:26:10.853 "min_latency_us": 2578.6991304347825, 00:26:10.853 "max_latency_us": 11682.504347826087 00:26:10.853 } 00:26:10.853 ], 00:26:10.853 "core_count": 1 00:26:10.853 } 00:26:10.853 09:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:10.853 09:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:10.853 09:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:10.853 09:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:10.853 | select(.opcode=="crc32c") 00:26:10.853 | "\(.module_name) \(.executed)"' 00:26:10.853 09:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3057765 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3057765 ']' 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3057765 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057765 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057765' 00:26:10.853 killing process with pid 3057765 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3057765 00:26:10.853 Received shutdown signal, test time was about 2.000000 seconds 00:26:10.853 00:26:10.853 Latency(us) 00:26:10.853 [2024-11-20T08:58:34.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.853 [2024-11-20T08:58:34.185Z] =================================================================================================================== 00:26:10.853 [2024-11-20T08:58:34.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:10.853 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3057765 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3058247 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 3058247 /var/tmp/bperf.sock 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3058247 ']' 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.113 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.113 [2024-11-20 09:58:34.350888] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:26:11.113 [2024-11-20 09:58:34.350937] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058247 ] 00:26:11.113 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:11.113 Zero copy mechanism will not be used. 
00:26:11.113 [2024-11-20 09:58:34.426527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.373 [2024-11-20 09:58:34.469391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.373 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.373 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:11.373 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:11.373 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:11.373 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:11.632 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.632 09:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.201 nvme0n1 00:26:12.201 09:58:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:12.201 09:58:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:12.201 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:12.201 Zero copy mechanism will not be used. 00:26:12.201 Running I/O for 2 seconds... 
00:26:14.073 5675.00 IOPS, 709.38 MiB/s [2024-11-20T08:58:37.405Z] 5790.50 IOPS, 723.81 MiB/s 00:26:14.073 Latency(us) 00:26:14.073 [2024-11-20T08:58:37.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.073 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:14.073 nvme0n1 : 2.00 5791.37 723.92 0.00 0.00 2760.32 940.30 8491.19 00:26:14.073 [2024-11-20T08:58:37.405Z] =================================================================================================================== 00:26:14.073 [2024-11-20T08:58:37.405Z] Total : 5791.37 723.92 0.00 0.00 2760.32 940.30 8491.19 00:26:14.073 { 00:26:14.073 "results": [ 00:26:14.073 { 00:26:14.073 "job": "nvme0n1", 00:26:14.073 "core_mask": "0x2", 00:26:14.073 "workload": "randread", 00:26:14.073 "status": "finished", 00:26:14.073 "queue_depth": 16, 00:26:14.073 "io_size": 131072, 00:26:14.073 "runtime": 2.002464, 00:26:14.073 "iops": 5791.365038272847, 00:26:14.073 "mibps": 723.9206297841059, 00:26:14.073 "io_failed": 0, 00:26:14.073 "io_timeout": 0, 00:26:14.073 "avg_latency_us": 2760.3201847554274, 00:26:14.073 "min_latency_us": 940.2991304347826, 00:26:14.073 "max_latency_us": 8491.186086956523 00:26:14.073 } 00:26:14.073 ], 00:26:14.073 "core_count": 1 00:26:14.073 } 00:26:14.073 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:14.073 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:14.073 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:14.073 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:14.073 | select(.opcode=="crc32c") 00:26:14.073 | "\(.module_name) \(.executed)"' 00:26:14.073 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3058247 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3058247 ']' 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3058247 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058247 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058247' 00:26:14.333 killing process with pid 3058247 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3058247 00:26:14.333 Received shutdown signal, test time was about 2.000000 seconds 
00:26:14.333 00:26:14.333 Latency(us) 00:26:14.333 [2024-11-20T08:58:37.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.333 [2024-11-20T08:58:37.665Z] =================================================================================================================== 00:26:14.333 [2024-11-20T08:58:37.665Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.333 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3058247 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3058929 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3058929 /var/tmp/bperf.sock 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3058929 ']' 00:26:14.592 09:58:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.592 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.592 [2024-11-20 09:58:37.807843] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:26:14.592 [2024-11-20 09:58:37.807891] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058929 ] 00:26:14.592 [2024-11-20 09:58:37.882298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.592 [2024-11-20 09:58:37.922417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.852 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.852 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:14.852 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:14.852 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:14.852 09:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.111 09:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.111 09:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.371 nvme0n1 00:26:15.371 09:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.371 09:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.630 Running I/O for 2 seconds... 
00:26:17.501 27480.00 IOPS, 107.34 MiB/s [2024-11-20T08:58:40.833Z] 27374.00 IOPS, 106.93 MiB/s 00:26:17.501 Latency(us) 00:26:17.501 [2024-11-20T08:58:40.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.502 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:17.502 nvme0n1 : 2.01 27395.75 107.01 0.00 0.00 4666.49 2265.27 15728.64 00:26:17.502 [2024-11-20T08:58:40.834Z] =================================================================================================================== 00:26:17.502 [2024-11-20T08:58:40.834Z] Total : 27395.75 107.01 0.00 0.00 4666.49 2265.27 15728.64 00:26:17.502 { 00:26:17.502 "results": [ 00:26:17.502 { 00:26:17.502 "job": "nvme0n1", 00:26:17.502 "core_mask": "0x2", 00:26:17.502 "workload": "randwrite", 00:26:17.502 "status": "finished", 00:26:17.502 "queue_depth": 128, 00:26:17.502 "io_size": 4096, 00:26:17.502 "runtime": 2.006041, 00:26:17.502 "iops": 27395.751133700658, 00:26:17.502 "mibps": 107.0146528660182, 00:26:17.502 "io_failed": 0, 00:26:17.502 "io_timeout": 0, 00:26:17.502 "avg_latency_us": 4666.494859047904, 00:26:17.502 "min_latency_us": 2265.2660869565216, 00:26:17.502 "max_latency_us": 15728.64 00:26:17.502 } 00:26:17.502 ], 00:26:17.502 "core_count": 1 00:26:17.502 } 00:26:17.502 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:17.502 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:17.502 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:17.502 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:17.502 | select(.opcode=="crc32c") 00:26:17.502 | "\(.module_name) \(.executed)"' 00:26:17.502 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.760 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3058929 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3058929 ']' 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3058929 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058929 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058929' 00:26:17.761 killing process with pid 3058929 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3058929 00:26:17.761 Received shutdown signal, test time was about 2.000000 seconds 
00:26:17.761 00:26:17.761 Latency(us) 00:26:17.761 [2024-11-20T08:58:41.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.761 [2024-11-20T08:58:41.093Z] =================================================================================================================== 00:26:17.761 [2024-11-20T08:58:41.093Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.761 09:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3058929 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3059403 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3059403 /var/tmp/bperf.sock 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3059403 ']' 00:26:18.020 09:58:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.020 [2024-11-20 09:58:41.183253] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:26:18.020 [2024-11-20 09:58:41.183313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059403 ] 00:26:18.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.020 Zero copy mechanism will not be used. 
00:26:18.020 [2024-11-20 09:58:41.257439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.020 [2024-11-20 09:58:41.300083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:18.020 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:18.279 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.279 09:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.849 nvme0n1 00:26:18.849 09:58:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:18.849 09:58:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.849 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.849 Zero copy mechanism will not be used. 00:26:18.849 Running I/O for 2 seconds... 
00:26:21.162 6677.00 IOPS, 834.62 MiB/s [2024-11-20T08:58:44.494Z] 6346.50 IOPS, 793.31 MiB/s 00:26:21.162 Latency(us) 00:26:21.162 [2024-11-20T08:58:44.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.162 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:21.162 nvme0n1 : 2.00 6341.16 792.65 0.00 0.00 2518.12 1880.60 9744.92 00:26:21.162 [2024-11-20T08:58:44.494Z] =================================================================================================================== 00:26:21.162 [2024-11-20T08:58:44.494Z] Total : 6341.16 792.65 0.00 0.00 2518.12 1880.60 9744.92 00:26:21.162 { 00:26:21.162 "results": [ 00:26:21.162 { 00:26:21.162 "job": "nvme0n1", 00:26:21.162 "core_mask": "0x2", 00:26:21.162 "workload": "randwrite", 00:26:21.162 "status": "finished", 00:26:21.162 "queue_depth": 16, 00:26:21.162 "io_size": 131072, 00:26:21.162 "runtime": 2.004838, 00:26:21.162 "iops": 6341.160732188835, 00:26:21.162 "mibps": 792.6450915236044, 00:26:21.162 "io_failed": 0, 00:26:21.162 "io_timeout": 0, 00:26:21.162 "avg_latency_us": 2518.1162255684867, 00:26:21.162 "min_latency_us": 1880.5982608695651, 00:26:21.162 "max_latency_us": 9744.918260869565 00:26:21.162 } 00:26:21.162 ], 00:26:21.162 "core_count": 1 00:26:21.162 } 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:21.162 | select(.opcode=="crc32c") 00:26:21.162 | "\(.module_name) \(.executed)"' 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3059403 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3059403 ']' 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3059403 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3059403 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3059403' 00:26:21.162 killing process with pid 3059403 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3059403 00:26:21.162 Received shutdown signal, test time was about 2.000000 seconds 
00:26:21.162 00:26:21.162 Latency(us) 00:26:21.162 [2024-11-20T08:58:44.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.162 [2024-11-20T08:58:44.494Z] =================================================================================================================== 00:26:21.162 [2024-11-20T08:58:44.494Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.162 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3059403 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3057744 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3057744 ']' 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3057744 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057744 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057744' 00:26:21.420 killing process with pid 3057744 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3057744 00:26:21.420 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3057744 00:26:21.678 00:26:21.678 
real 0m14.249s 00:26:21.678 user 0m27.376s 00:26:21.678 sys 0m4.481s 00:26:21.678 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.678 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.678 ************************************ 00:26:21.678 END TEST nvmf_digest_clean 00:26:21.678 ************************************ 00:26:21.678 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:21.678 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.679 ************************************ 00:26:21.679 START TEST nvmf_digest_error 00:26:21.679 ************************************ 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3060120 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3060120 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 
3060120 ']' 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.679 09:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:21.679 [2024-11-20 09:58:44.879064] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:26:21.679 [2024-11-20 09:58:44.879104] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.679 [2024-11-20 09:58:44.943059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.679 [2024-11-20 09:58:44.984717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.679 [2024-11-20 09:58:44.984755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:21.679 [2024-11-20 09:58:44.984762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.679 [2024-11-20 09:58:44.984769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.679 [2024-11-20 09:58:44.984774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.679 [2024-11-20 09:58:44.985357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:21.937 [2024-11-20 09:58:45.073845] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.937 09:58:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:21.937 null0
00:26:21.937 [2024-11-20 09:58:45.167989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:21.937 [2024-11-20 09:58:45.192192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3060143
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3060143 /var/tmp/bperf.sock
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3060143 ']'
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:21.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:21.937 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:21.937 [2024-11-20 09:58:45.246129] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization...
00:26:21.937 [2024-11-20 09:58:45.246170] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060143 ]
00:26:22.195 [2024-11-20 09:58:45.322579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:22.195 [2024-11-20 09:58:45.365334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:22.195 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:22.195 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:26:22.195 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:22.195 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:22.453 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:22.453 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.453 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.453 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.453 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:22.453 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:22.712 nvme0n1
00:26:22.712 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:22.712 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.712 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.712 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.713 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:22.713 09:58:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:22.972 Running I/O for 2 seconds...
00:26:22.972 [2024-11-20 09:58:46.075041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.972 [2024-11-20 09:58:46.075080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.972 [2024-11-20 09:58:46.075091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.972 [2024-11-20 09:58:46.084480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.972 [2024-11-20 09:58:46.084504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.972 [2024-11-20 09:58:46.084514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.972 [2024-11-20 09:58:46.093385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.972 [2024-11-20 09:58:46.093409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.972 [2024-11-20 09:58:46.093418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.972 [2024-11-20 09:58:46.102806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.972 [2024-11-20 09:58:46.102828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.972 [2024-11-20 09:58:46.102837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.972 [2024-11-20 09:58:46.112199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.972 [2024-11-20 09:58:46.112222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.972 [2024-11-20 09:58:46.112230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.972 [2024-11-20 09:58:46.122195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.972 [2024-11-20 09:58:46.122217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.972 [2024-11-20 09:58:46.122225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.972 [2024-11-20 09:58:46.131888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.972 [2024-11-20 09:58:46.131910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.972 [2024-11-20 09:58:46.131919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.972 [2024-11-20 09:58:46.142067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.972 [2024-11-20 09:58:46.142090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.142098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.150816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.150839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.150847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.163541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.163563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.163575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.173093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.173115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.173124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.181197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.181218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.181227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.191153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.191175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.191183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.200019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.200040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.200048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.211876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.211899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.211907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.220966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.220987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.220996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.233169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.233190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.233199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.241681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.241703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.241712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.253888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.253914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.253922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.262206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.262234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.262242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.274138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.274160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.274168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.282696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.282718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.282726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:22.973 [2024-11-20 09:58:46.293250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:22.973 [2024-11-20 09:58:46.293270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:22.973 [2024-11-20 09:58:46.293279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.305545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.305567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.305576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.315492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.315514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.315523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.327286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.327307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.327316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.335967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.335988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.336000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.347075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.347096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.347104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.358520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.358542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.358550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.371025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.371046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.371055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.384094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.384115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.384123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.395486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.395506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.395515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.408254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.408276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.408284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.416783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.416807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.416815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.428542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.428564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.428573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.435944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.435977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.435985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.445997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.446019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.446028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.457155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.457177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.457185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.469772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.469793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.469801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.478934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.478961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.478969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.490300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.490322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.490330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.502079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.502100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.502109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.512270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.512291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.512300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.525583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.525606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.525614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.534114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.534136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.534144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.233 [2024-11-20 09:58:46.546571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.233 [2024-11-20 09:58:46.546593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.233 [2024-11-20 09:58:46.546601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.234 [2024-11-20 09:58:46.558915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.234 [2024-11-20 09:58:46.558936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.234 [2024-11-20 09:58:46.558945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.570555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.570577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.570586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.579693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.579714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.579722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.590804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.590825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.590834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.600586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.600607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.600616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.609930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.609957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.609966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.619591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.619612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.619624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.630115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.630137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.630146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.641853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.641876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.641884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.653090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.653112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.653120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.662268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.662289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.662297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.674281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.674302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.674310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.682993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.683014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.683022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.692954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.692976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.692984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.702116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.702138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.702146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.711805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.711830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.711838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.720819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.720841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.720850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.730265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.730287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.730295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.740507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.740529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.740538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.749966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.749987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.749996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.759507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.759528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.759536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.768263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.768284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.494 [2024-11-20 09:58:46.768292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.494 [2024-11-20 09:58:46.777931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370)
00:26:23.494 [2024-11-20 09:58:46.777957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.495 [2024-11-20 09:58:46.777966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.495 [2024-11-20 09:58:46.787127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x23b3370) 00:26:23.495 [2024-11-20 09:58:46.787148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-11-20 09:58:46.787159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.495 [2024-11-20 09:58:46.797838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.495 [2024-11-20 09:58:46.797859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-11-20 09:58:46.797867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.495 [2024-11-20 09:58:46.806321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.495 [2024-11-20 09:58:46.806342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-11-20 09:58:46.806350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.495 [2024-11-20 09:58:46.818529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.495 [2024-11-20 09:58:46.818551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.495 [2024-11-20 09:58:46.818559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.754 [2024-11-20 09:58:46.827135] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.754 [2024-11-20 09:58:46.827158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.754 [2024-11-20 09:58:46.827167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.754 [2024-11-20 09:58:46.838792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.754 [2024-11-20 09:58:46.838814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.754 [2024-11-20 09:58:46.838823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.754 [2024-11-20 09:58:46.850335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.754 [2024-11-20 09:58:46.850357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.850365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.859207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.859228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.859237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.868967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.869004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.869013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.880817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.880841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.880849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.889503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.889524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.889533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.899826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.899847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.899856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.910336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.910358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.910366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.923828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.923851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.923859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.932172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.932194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.932202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.944555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.944577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 
09:58:46.944585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.957542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.957564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.957572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.970738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.970759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.970768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.980940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.980965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.980974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:46.989739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:46.989761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7962 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:46.989769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:47.000716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:47.000737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:47.000746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:47.008606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:47.008627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:47.008636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:47.020639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:47.020659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:47.020668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:47.032217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:47.032239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:47.032248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:47.044943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:47.044970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:47.044994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:47.056869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:47.056891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:47.056900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 24362.00 IOPS, 95.16 MiB/s [2024-11-20T08:58:47.087Z] [2024-11-20 09:58:47.067037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:47.067059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:47.067071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.755 [2024-11-20 09:58:47.079833] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:23.755 [2024-11-20 09:58:47.079856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.755 [2024-11-20 09:58:47.079865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.091558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.091580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.091589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.100776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.100797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.100806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.111345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.111367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.111375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.119752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.119772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.119781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.129088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.129110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.129118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.139952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.139974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.139982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.151982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.152004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.152012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.164897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.164919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.164927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.177892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.177914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.177922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.190794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.190816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.190824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.199098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.199120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 
09:58:47.199128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.211856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.211878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.211887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.224407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.015 [2024-11-20 09:58:47.224430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.015 [2024-11-20 09:58:47.224438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.015 [2024-11-20 09:58:47.236796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.236819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.236827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 09:58:47.247883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.247906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15709 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.247915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 09:58:47.257698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.257720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.257732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 09:58:47.270269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.270291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.270300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 09:58:47.279490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.279513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.279523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 09:58:47.291344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.291367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.291375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 09:58:47.300315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.300336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.300345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 09:58:47.313461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.313483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.313492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 09:58:47.325469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.325491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.325499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.016 [2024-11-20 09:58:47.336818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x23b3370) 00:26:24.016 [2024-11-20 09:58:47.336839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.016 [2024-11-20 09:58:47.336848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.346507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.346530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.346539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.355759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.355786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.355795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.365471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.365494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.365502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.373861] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.373883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.373891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.386023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.386044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.386052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.397071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.397093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.397101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.409069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.409090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.409099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.418653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.418676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.418684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.431210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.431232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.431240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.442379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.442401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.442410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.451507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.451528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.451537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.464131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.464153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.464162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.472907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.472929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.472938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.485643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.485665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.485674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.497528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.497550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.497558] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.507075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.507097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.507106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.520018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.520041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.520049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.528537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.528559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.528568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.540685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.540707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14704 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.540720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.550142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.550164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.276 [2024-11-20 09:58:47.550173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.276 [2024-11-20 09:58:47.559861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.276 [2024-11-20 09:58:47.559882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.277 [2024-11-20 09:58:47.559891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.277 [2024-11-20 09:58:47.568431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.277 [2024-11-20 09:58:47.568454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.277 [2024-11-20 09:58:47.568462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.277 [2024-11-20 09:58:47.579266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.277 [2024-11-20 09:58:47.579288] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.277 [2024-11-20 09:58:47.579296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.277 [2024-11-20 09:58:47.588733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.277 [2024-11-20 09:58:47.588754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.277 [2024-11-20 09:58:47.588763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.277 [2024-11-20 09:58:47.598786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.277 [2024-11-20 09:58:47.598809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.277 [2024-11-20 09:58:47.598817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.536 [2024-11-20 09:58:47.607333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.536 [2024-11-20 09:58:47.607356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.536 [2024-11-20 09:58:47.607365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.536 [2024-11-20 09:58:47.617946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.536 [2024-11-20 
09:58:47.617974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.536 [2024-11-20 09:58:47.617983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.536 [2024-11-20 09:58:47.628189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.536 [2024-11-20 09:58:47.628211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.536 [2024-11-20 09:58:47.628220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.536 [2024-11-20 09:58:47.637171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.536 [2024-11-20 09:58:47.637193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.536 [2024-11-20 09:58:47.637201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.536 [2024-11-20 09:58:47.646563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.536 [2024-11-20 09:58:47.646584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.536 [2024-11-20 09:58:47.646594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.536 [2024-11-20 09:58:47.656277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23b3370) 00:26:24.536 [2024-11-20 09:58:47.656298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.536 [2024-11-20 09:58:47.656307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.665872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.665894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.665903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.676827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.676849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.676857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.686486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.686508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.686516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.695286] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.695307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.695316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.705462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.705482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.705494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.714746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.714767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.714776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.723564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.723586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.723594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.735036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.735059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.735068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.744279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.744301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.744309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.753614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.753636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.753644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.764531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.764552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.764560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.772877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.772899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.772908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.785359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.785381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.785389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.797869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.797898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.797906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.810288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.810309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 
09:58:47.810318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.823248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.823270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.823278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.831682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.831705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.831714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.843974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.843998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.844006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.537 [2024-11-20 09:58:47.857314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.537 [2024-11-20 09:58:47.857337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4216 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.537 [2024-11-20 09:58:47.857345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.797 [2024-11-20 09:58:47.867466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.797 [2024-11-20 09:58:47.867490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.797 [2024-11-20 09:58:47.867499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.797 [2024-11-20 09:58:47.878055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.797 [2024-11-20 09:58:47.878088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.797 [2024-11-20 09:58:47.878097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.797 [2024-11-20 09:58:47.887066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.797 [2024-11-20 09:58:47.887088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.797 [2024-11-20 09:58:47.887097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.797 [2024-11-20 09:58:47.896256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.797 [2024-11-20 09:58:47.896277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.797 [2024-11-20 09:58:47.896285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.797 [2024-11-20 09:58:47.906637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.797 [2024-11-20 09:58:47.906658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.797 [2024-11-20 09:58:47.906667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.797 [2024-11-20 09:58:47.917567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.797 [2024-11-20 09:58:47.917589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.797 [2024-11-20 09:58:47.917597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.797 [2024-11-20 09:58:47.926237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.797 [2024-11-20 09:58:47.926259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.797 [2024-11-20 09:58:47.926267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:47.936174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 
00:26:24.798 [2024-11-20 09:58:47.936196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:47.936204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:47.945959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:47.945982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:47.945990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:47.956118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:47.956140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:47.956149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:47.965035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:47.965056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:47.965065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:47.974957] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:47.974979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:47.975007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:47.984194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:47.984216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:47.984224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:47.993766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:47.993788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:47.993797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:48.003932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:48.003959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:48.003969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:48.013381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:48.013402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:48.013411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:48.022332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:48.022352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:48.022361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:48.031298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:48.031319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:48.031328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:48.041045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:48.041067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:48.041075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:48.050114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:48.050135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:48.050144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 [2024-11-20 09:58:48.060219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:48.060244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:48.060253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 24408.50 IOPS, 95.35 MiB/s [2024-11-20T08:58:48.130Z] [2024-11-20 09:58:48.071242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23b3370) 00:26:24.798 [2024-11-20 09:58:48.071264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.798 [2024-11-20 09:58:48.071272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.798 00:26:24.798 Latency(us) 00:26:24.798 [2024-11-20T08:58:48.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.798 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:24.798 nvme0n1 : 2.01 24428.53 95.42 0.00 0.00 5231.98 2778.16 
20629.59 00:26:24.798 [2024-11-20T08:58:48.130Z] =================================================================================================================== 00:26:24.798 [2024-11-20T08:58:48.130Z] Total : 24428.53 95.42 0.00 0.00 5231.98 2778.16 20629.59 00:26:24.798 { 00:26:24.798 "results": [ 00:26:24.798 { 00:26:24.798 "job": "nvme0n1", 00:26:24.798 "core_mask": "0x2", 00:26:24.798 "workload": "randread", 00:26:24.798 "status": "finished", 00:26:24.798 "queue_depth": 128, 00:26:24.798 "io_size": 4096, 00:26:24.798 "runtime": 2.006138, 00:26:24.798 "iops": 24428.52884497477, 00:26:24.798 "mibps": 95.4239408006827, 00:26:24.798 "io_failed": 0, 00:26:24.798 "io_timeout": 0, 00:26:24.798 "avg_latency_us": 5231.978565049713, 00:26:24.798 "min_latency_us": 2778.1565217391303, 00:26:24.798 "max_latency_us": 20629.59304347826 00:26:24.798 } 00:26:24.798 ], 00:26:24.798 "core_count": 1 00:26:24.798 } 00:26:24.798 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:24.798 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:24.798 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:24.798 | .driver_specific 00:26:24.798 | .nvme_error 00:26:24.798 | .status_code 00:26:24.798 | .command_transient_transport_error' 00:26:24.798 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 )) 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3060143 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3060143 ']' 
00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3060143 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060143 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060143' 00:26:25.058 killing process with pid 3060143 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3060143 00:26:25.058 Received shutdown signal, test time was about 2.000000 seconds 00:26:25.058 00:26:25.058 Latency(us) 00:26:25.058 [2024-11-20T08:58:48.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.058 [2024-11-20T08:58:48.390Z] =================================================================================================================== 00:26:25.058 [2024-11-20T08:58:48.390Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.058 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3060143 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randread 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3060623 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3060623 /var/tmp/bperf.sock 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3060623 ']' 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.318 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.318 [2024-11-20 09:58:48.550665] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:26:25.318 [2024-11-20 09:58:48.550712] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060623 ] 00:26:25.318 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.318 Zero copy mechanism will not be used. 00:26:25.318 [2024-11-20 09:58:48.626306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.577 [2024-11-20 09:58:48.664041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.577 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.577 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:25.577 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:25.577 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:25.836 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:25.836 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.836 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.836 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.836 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.836 09:58:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.094 nvme0n1 00:26:26.094 09:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:26.094 09:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.094 09:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:26.094 09:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.094 09:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:26.094 09:58:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:26.094 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:26.094 Zero copy mechanism will not be used. 00:26:26.094 Running I/O for 2 seconds... 
00:26:26.094 [2024-11-20 09:58:49.348490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.348529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.348540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.354074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.354102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.354113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.359463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.359489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.359498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.364784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.364808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.364816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.370100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.370123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.370132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.375355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.375378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.375390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.381323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.381346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.381354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.388655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.388678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.388686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.394205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.394227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.394235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.399507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.399530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.399539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.404820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.404843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.404851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.410113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.410135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.094 [2024-11-20 09:58:49.410143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.415391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.415413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.415421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.094 [2024-11-20 09:58:49.420586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.094 [2024-11-20 09:58:49.420609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.094 [2024-11-20 09:58:49.420617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.425849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.425877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.425885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.431143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.431166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.431174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.436339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.436362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.436370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.441559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.441580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.441589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.446866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.446888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.446897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.452164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.452186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.452194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.457496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.457520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.457528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.463583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.463606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.463615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.469042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.469066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.469074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.474364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 
00:26:26.353 [2024-11-20 09:58:49.474387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.474396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.479654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.479677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.479686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.484935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.484965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.484974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.490186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.490210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.353 [2024-11-20 09:58:49.490218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.353 [2024-11-20 09:58:49.495450] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.353 [2024-11-20 09:58:49.495472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.495481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.500676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.500698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.500706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.505884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.505913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.505921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.511183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.511205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.511214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.516566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.516588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.516600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.521860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.521882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.521890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.527123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.527146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.527154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.532419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.532442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.532450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.537699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.537721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.537730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.542937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.542967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.542975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.548138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.548161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.548169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.553368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.553390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.553398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.558733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.558757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.558765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.564051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.564080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.564089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.569351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.569373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.569382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.574579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.574600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:26.354 [2024-11-20 09:58:49.574609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.579861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.579882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.579891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.585062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.585085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.585093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.590273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.590295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.590304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.595514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.595535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.595543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.600760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.600784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.600793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.606039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.606074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.606083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.611359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.354 [2024-11-20 09:58:49.611382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.354 [2024-11-20 09:58:49.611391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.354 [2024-11-20 09:58:49.616640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.616664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.616672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.621898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.621921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.621930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.627128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.627151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.627159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.633202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.633225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.633234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.638597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 
00:26:26.355 [2024-11-20 09:58:49.638620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.638629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.643817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.643839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.643848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.649125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.649147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.649156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.654499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.654520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.654532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.659749] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.659771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.659780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.664971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.664994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.665002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.670202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.670224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.670232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.675392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.675414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.675422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:26.355 [2024-11-20 09:58:49.680608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.355 [2024-11-20 09:58:49.680631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.355 [2024-11-20 09:58:49.680639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.615 [2024-11-20 09:58:49.685883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.615 [2024-11-20 09:58:49.685907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.615 [2024-11-20 09:58:49.685915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.615 [2024-11-20 09:58:49.691154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.615 [2024-11-20 09:58:49.691175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.615 [2024-11-20 09:58:49.691183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.615 [2024-11-20 09:58:49.696341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.615 [2024-11-20 09:58:49.696363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.615 [2024-11-20 09:58:49.696372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.615 [2024-11-20 09:58:49.701546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.615 [2024-11-20 09:58:49.701569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.615 [2024-11-20 09:58:49.701577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.615 [2024-11-20 09:58:49.706697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.615 [2024-11-20 09:58:49.706719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.615 [2024-11-20 09:58:49.706727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.615 [2024-11-20 09:58:49.711976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.711998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.712006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.717180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.717203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.717210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.722454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.722476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.722484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.727710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.727732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.727740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.732991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.733014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.733022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.738171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.738193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.616 [2024-11-20 09:58:49.738201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.743371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.743392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.743404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.748635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.748658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.748666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.753902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.753924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.753932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.759114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.759135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.759143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.764298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.764319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.764327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.769614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.769636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.769644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.774859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.774881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.774890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.780162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.780183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.780191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.785391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.785414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.785422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.790593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.790619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.790627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.796770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.796794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.796803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.804383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 
00:26:26.616 [2024-11-20 09:58:49.804407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.804415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.810979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.811003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.811011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.817234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.817258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.817266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.823316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.823339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.823348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.826423] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.826445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.826454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.831535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.831559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.831568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.836584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.836607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.836615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.842209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.842232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.842241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.848059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.848081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.848089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.616 [2024-11-20 09:58:49.853862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.616 [2024-11-20 09:58:49.853885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.616 [2024-11-20 09:58:49.853896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.858730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.858754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.617 [2024-11-20 09:58:49.858763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.863374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.863396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.617 [2024-11-20 09:58:49.863406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.866514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.866546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.617 [2024-11-20 09:58:49.866554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.871634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.871656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.617 [2024-11-20 09:58:49.871664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.876874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.876895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.617 [2024-11-20 09:58:49.876903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.881910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.881932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.617 [2024-11-20 09:58:49.881945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.887013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.887035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.617 [2024-11-20 09:58:49.887044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.891957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.891979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.617 [2024-11-20 09:58:49.891987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.897066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.897088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.617 [2024-11-20 09:58:49.897096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:26.617 [2024-11-20 09:58:49.902208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:26.617 [2024-11-20 09:58:49.902231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.617 [2024-11-20 09:58:49.902239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.617 [2024-11-20 09:58:49.907268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.617 [2024-11-20 09:58:49.907291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.617 [2024-11-20 09:58:49.907299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.617 [2024-11-20 09:58:49.912207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.617 [2024-11-20 09:58:49.912229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.617 [2024-11-20 09:58:49.912238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.617 [2024-11-20 09:58:49.917570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.617 [2024-11-20 09:58:49.917592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.617 [2024-11-20 09:58:49.917600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.617 [2024-11-20 09:58:49.922858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.617 [2024-11-20 09:58:49.922879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.617 [2024-11-20 09:58:49.922888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.617 [2024-11-20 09:58:49.928130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.617 [2024-11-20 09:58:49.928156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.617 [2024-11-20 09:58:49.928164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.617 [2024-11-20 09:58:49.934485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.617 [2024-11-20 09:58:49.934508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.617 [2024-11-20 09:58:49.934517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.617 [2024-11-20 09:58:49.941870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.617 [2024-11-20 09:58:49.941893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.617 [2024-11-20 09:58:49.941901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:49.948509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:49.948534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:49.948542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:49.955999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:49.956022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:49.956031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:49.962994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:49.963018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:49.963027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:49.971034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:49.971057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:49.971066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:49.979194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:49.979218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:49.979227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:49.987164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:49.987187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:49.987196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:49.995401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:49.995425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:49.995434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:50.004154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:50.004178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:50.004187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:50.012669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:50.012693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:50.012702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:50.021440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:50.021466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.878 [2024-11-20 09:58:50.021475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.878 [2024-11-20 09:58:50.029890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.878 [2024-11-20 09:58:50.029916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.029925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.037957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.037984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.037996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.046252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.046276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.046285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.054615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.054639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.054648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.061841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.061865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.061878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.068264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.068289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.068298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.075345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.075368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.075377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.081519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.081542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.081551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.086891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.086913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.086922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.092122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.092144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.092153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.098434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.098457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.098465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.104823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.104847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.104857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.112116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.112141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.112151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.119560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.119589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.119598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.126684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.126711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.126721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.133613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.133638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.133648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.137665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.137687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.137697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.144521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.144545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.144556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.152040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.152064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.152073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.159155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.159180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.159189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.166092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.166117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.166126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.172647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.172672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.172681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.180135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.180160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.180170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.187999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.188024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.188033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.194351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.194375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.194384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.199850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.199874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.199883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:26.879 [2024-11-20 09:58:50.205227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:26.879 [2024-11-20 09:58:50.205250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:26.879 [2024-11-20 09:58:50.205258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.210405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.210429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.210437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.215621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.215644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.215653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.220914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.220937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.220955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.226123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.226146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.226159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.231496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.231520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.231528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.236881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.236904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.236913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.242213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.242236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.242244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.247549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.247572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.247580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.252887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.252910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.252918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.258220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.258243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.258252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.263522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.263544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.263553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.268821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.268843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.268851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.274103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.274126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.274134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.279385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.279406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.279415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.284676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.284698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.284706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.289983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.290006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.290015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.295320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.295342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.295351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.300649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.300671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.300679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.306008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.306030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.306039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.311317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.311340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.311350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.316693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.141 [2024-11-20 09:58:50.316716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.141 [2024-11-20 09:58:50.316732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.141 [2024-11-20 09:58:50.322108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.322130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.322139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.327433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.327455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.327463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.332709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.332732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.332741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.338121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.338144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.338152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.343599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.343623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.343632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.142 5414.00 IOPS, 676.75 MiB/s [2024-11-20T08:58:50.474Z] [2024-11-20 09:58:50.349753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.349776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.349784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.355305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.355329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.355338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.360859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.360881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.360889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.366516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.366543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.366551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.372020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.372042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.372050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.377548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.377570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.377579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.142 [2024-11-20 09:58:50.382922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.142 [2024-11-20 09:58:50.382945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.142 [2024-11-20 09:58:50.382960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022
p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.388280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.388302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.388310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.393588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.393611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.393619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.399044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.399066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.399074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.404658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.404680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.404689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.410297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.410319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.410327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.415884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.415906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.415915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.421464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.421486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.421494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.427015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.427037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.427045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.432530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.432552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.432560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.438083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.438118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.438137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.443518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.443540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.443548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.448804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.448826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.142 [2024-11-20 09:58:50.448834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.454082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.454104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.454112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.459427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.459449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.459462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.142 [2024-11-20 09:58:50.464885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.142 [2024-11-20 09:58:50.464907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.142 [2024-11-20 09:58:50.464915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.470334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.470358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.470367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.476036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.476059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.476067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.483493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.483516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.483524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.489920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.489943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.489957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.494564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.494587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.494595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.499234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.499256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.499265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.503872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.503895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.503903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.508575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.508602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.508612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.513490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 
00:26:27.405 [2024-11-20 09:58:50.513511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.513520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.518398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.518421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.518430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.523745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.523768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.523776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.529240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.529264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.529272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.534682] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.534705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.534713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.540190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.540213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.540222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.545688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.545711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.405 [2024-11-20 09:58:50.545720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.405 [2024-11-20 09:58:50.551200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.405 [2024-11-20 09:58:50.551224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.551233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.556720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.556743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.556751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.562230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.562252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.562260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.567821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.567843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.567851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.573250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.573272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.573281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.578783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.578805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.578813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.584261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.584283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.584291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.589720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.589741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.589749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.595547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.595569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.595577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.601120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.601142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.601155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.606614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.606637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.606646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.612038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.612061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.612070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.617487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.617509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.617518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.623130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.623153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.623161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.628707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.628730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.628738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.634110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.634134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.634143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.639578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.639601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.639609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.645069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.645090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.645099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.650629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.650650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.650658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.406 [2024-11-20 09:58:50.656285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.406 [2024-11-20 09:58:50.656307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.406 [2024-11-20 09:58:50.656316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.407 [2024-11-20 09:58:50.661852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.407 [2024-11-20 09:58:50.661874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-11-20 09:58:50.661882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.407 [2024-11-20 09:58:50.667429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.407 [2024-11-20 09:58:50.667450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-11-20 09:58:50.667458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.407 [2024-11-20 09:58:50.672839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.407 [2024-11-20 09:58:50.672861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-11-20 09:58:50.672869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.407 [2024-11-20 09:58:50.678781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.407 [2024-11-20 09:58:50.678802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-11-20 09:58:50.678810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.407 [2024-11-20 09:58:50.684133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x866580) 00:26:27.407 [2024-11-20 09:58:50.684154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-11-20 09:58:50.684162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.407 [2024-11-20 09:58:50.689426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.407 [2024-11-20 09:58:50.689448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-11-20 09:58:50.689456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.407 [2024-11-20 09:58:50.694816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.407 [2024-11-20 09:58:50.694838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-11-20 09:58:50.694850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.407 [2024-11-20 09:58:50.700137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.407 [2024-11-20 09:58:50.700158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.407 [2024-11-20 09:58:50.700167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.407 [2024-11-20 09:58:50.705414] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.407 [2024-11-20 09:58:50.705436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.407 [2024-11-20 09:58:50.705444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.407 [2024-11-20 09:58:50.710796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.407 [2024-11-20 09:58:50.710818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.407 [2024-11-20 09:58:50.710826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.407 [2024-11-20 09:58:50.716610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.407 [2024-11-20 09:58:50.716633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.407 [2024-11-20 09:58:50.716641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.407 [2024-11-20 09:58:50.722199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.407 [2024-11-20 09:58:50.722222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.407 [2024-11-20 09:58:50.722230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.407 [2024-11-20 09:58:50.727823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.407 [2024-11-20 09:58:50.727847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.407 [2024-11-20 09:58:50.727856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.407 [2024-11-20 09:58:50.733562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.733584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.733593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.739196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.739219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.739228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.744572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.744598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.744606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.749932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.749961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.749969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.755450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.755473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.755481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.760960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.760982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.760990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.766445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.766467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.766475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.771860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.771882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.771890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.777491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.777514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.777522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.782942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.782970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.782978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.788364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.788385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.788393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.793769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.793791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.793799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.799108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.799130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.799138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.804535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.804556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.804564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.809957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.809978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.809986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.815333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.815355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.815363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.820788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.820809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.820817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.668 [2024-11-20 09:58:50.826039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.668 [2024-11-20 09:58:50.826060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.668 [2024-11-20 09:58:50.826069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.831436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.831459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.831468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.836901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.836924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.836935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.842273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.842296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.842304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.847675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.847697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.847705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.853162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.853184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.853193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.858742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.858764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.858773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.864187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.864209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.864218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.869517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.869537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.869546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.874804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.874826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.874834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.880225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.880247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.880255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.885660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.885683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.885692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.891090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.891112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.891121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.896797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.896819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.896828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.902179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.902201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.902209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.907519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.907542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.907550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.912914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.912937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.912945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.918279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.918302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.918310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.923611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.923632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.923640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.929003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.929025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.929033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.934413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.934435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.934444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.939742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.939764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.939772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.944932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.944961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.944970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.950294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.950316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.950324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.955609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.955631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.955639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.961122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.961145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.961154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.966617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.966639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.966648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.669 [2024-11-20 09:58:50.972025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.669 [2024-11-20 09:58:50.972047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.669 [2024-11-20 09:58:50.972055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.670 [2024-11-20 09:58:50.977548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.670 [2024-11-20 09:58:50.977571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.670 [2024-11-20 09:58:50.977583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.670 [2024-11-20 09:58:50.983060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.670 [2024-11-20 09:58:50.983082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.670 [2024-11-20 09:58:50.983090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.670 [2024-11-20 09:58:50.988507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.670 [2024-11-20 09:58:50.988529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.670 [2024-11-20 09:58:50.988537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.670 [2024-11-20 09:58:50.993893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.670 [2024-11-20 09:58:50.993914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.670 [2024-11-20 09:58:50.993922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.930 [2024-11-20 09:58:50.999462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.930 [2024-11-20 09:58:50.999484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.930 [2024-11-20 09:58:50.999492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.930 [2024-11-20 09:58:51.004974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.930 [2024-11-20 09:58:51.004996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.930 [2024-11-20 09:58:51.005004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.930 [2024-11-20 09:58:51.010594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.930 [2024-11-20 09:58:51.010617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.930 [2024-11-20 09:58:51.010625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.930 [2024-11-20 09:58:51.016144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.930 [2024-11-20 09:58:51.016167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.930 [2024-11-20 09:58:51.016175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.930 [2024-11-20 09:58:51.021633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.930 [2024-11-20 09:58:51.021655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.930 [2024-11-20 09:58:51.021664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.930 [2024-11-20 09:58:51.027189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.930 [2024-11-20 09:58:51.027211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.930 [2024-11-20 09:58:51.027219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.930 [2024-11-20 09:58:51.032723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.930 [2024-11-20 09:58:51.032746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.930 [2024-11-20 09:58:51.032755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.930 [2024-11-20 09:58:51.038111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.930 [2024-11-20 09:58:51.038133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.038142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.043532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.043554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.043563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.048936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.048965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.048973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.054276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.054299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.054307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.060668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.060690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.060699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.068529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.068552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.068561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.075671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.075694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.075706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.082026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.082049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.082057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.088315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.088337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.088346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.094235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.094258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.094267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.101333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.101356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.101365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.108660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.108683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.108692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.115541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.115564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.115574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.124321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.124343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.124352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.131215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.131238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.131247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.139022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.139050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.931 [2024-11-20 09:58:51.139058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:27.931 [2024-11-20 09:58:51.146699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580)
00:26:27.931 [2024-11-20 09:58:51.146723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.146732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.154219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.154243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.154252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.161671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.161695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.161703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.167897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.167919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.167928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.174748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.174772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.174781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.181816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.181840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.181849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.188293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.188316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.188325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.196082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.196105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.196114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.203649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.203672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.203681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.210570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.210593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.210602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.217574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.217596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.217605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.931 [2024-11-20 09:58:51.224177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.931 [2024-11-20 09:58:51.224201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.931 [2024-11-20 09:58:51.224210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.932 [2024-11-20 09:58:51.229748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x866580) 00:26:27.932 [2024-11-20 09:58:51.229771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.932 [2024-11-20 09:58:51.229779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.932 [2024-11-20 09:58:51.235306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.932 [2024-11-20 09:58:51.235329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.932 [2024-11-20 09:58:51.235338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.932 [2024-11-20 09:58:51.240339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.932 [2024-11-20 09:58:51.240362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.932 [2024-11-20 09:58:51.240370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.932 [2024-11-20 09:58:51.245654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.932 [2024-11-20 09:58:51.245678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.932 [2024-11-20 09:58:51.245687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.932 [2024-11-20 09:58:51.251033] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.932 [2024-11-20 09:58:51.251056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.932 [2024-11-20 09:58:51.251068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.932 [2024-11-20 09:58:51.256330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:27.932 [2024-11-20 09:58:51.256353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.932 [2024-11-20 09:58:51.256361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.261569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.261592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.261601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.266897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.266919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.266927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.272451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.272475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.272483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.278097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.278119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.278128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.283669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.283690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.283699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.286646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.286668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.286676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.292176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.292198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.292206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.297619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.297646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.297654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.303436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.303460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.303468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.308897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.308919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.308928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.314318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.314339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.314348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.319800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.319822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.319831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.325156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.325177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.325185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.330474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.330496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.330505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.335872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.335895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.335903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.341748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.341772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.341781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:28.192 [2024-11-20 09:58:51.347382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x866580) 00:26:28.192 [2024-11-20 09:58:51.347405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.192 [2024-11-20 09:58:51.347413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:28.192 5441.50 IOPS, 680.19 MiB/s 00:26:28.192 Latency(us) 00:26:28.192 [2024-11-20T08:58:51.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.192 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO 
size: 131072) 00:26:28.192 nvme0n1 : 2.00 5442.34 680.29 0.00 0.00 2937.18 676.73 8833.11 00:26:28.192 [2024-11-20T08:58:51.524Z] =================================================================================================================== 00:26:28.192 [2024-11-20T08:58:51.524Z] Total : 5442.34 680.29 0.00 0.00 2937.18 676.73 8833.11 00:26:28.192 { 00:26:28.192 "results": [ 00:26:28.192 { 00:26:28.192 "job": "nvme0n1", 00:26:28.192 "core_mask": "0x2", 00:26:28.192 "workload": "randread", 00:26:28.192 "status": "finished", 00:26:28.192 "queue_depth": 16, 00:26:28.192 "io_size": 131072, 00:26:28.192 "runtime": 2.002633, 00:26:28.192 "iops": 5442.335165754284, 00:26:28.192 "mibps": 680.2918957192855, 00:26:28.192 "io_failed": 0, 00:26:28.192 "io_timeout": 0, 00:26:28.192 "avg_latency_us": 2937.181687031519, 00:26:28.192 "min_latency_us": 676.7304347826087, 00:26:28.192 "max_latency_us": 8833.11304347826 00:26:28.192 } 00:26:28.192 ], 00:26:28.192 "core_count": 1 00:26:28.192 } 00:26:28.192 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:28.192 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:28.193 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:28.193 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:28.193 | .driver_specific 00:26:28.193 | .nvme_error 00:26:28.193 | .status_code 00:26:28.193 | .command_transient_transport_error' 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 352 > 0 )) 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3060623 00:26:28.452 09:58:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3060623 ']' 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3060623 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060623 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060623' 00:26:28.452 killing process with pid 3060623 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3060623 00:26:28.452 Received shutdown signal, test time was about 2.000000 seconds 00:26:28.452 00:26:28.452 Latency(us) 00:26:28.452 [2024-11-20T08:58:51.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.452 [2024-11-20T08:58:51.784Z] =================================================================================================================== 00:26:28.452 [2024-11-20T08:58:51.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.452 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3060623 00:26:28.712 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:28.712 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:26:28.712 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:28.712 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:28.712 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:28.712 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3061203 00:26:28.712 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3061203 /var/tmp/bperf.sock 00:26:28.712 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:28.713 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3061203 ']' 00:26:28.713 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.713 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.713 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.713 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.713 09:58:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.713 [2024-11-20 09:58:51.828354] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:26:28.713 [2024-11-20 09:58:51.828402] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061203 ] 00:26:28.713 [2024-11-20 09:58:51.902851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.713 [2024-11-20 09:58:51.945479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.713 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.713 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:28.713 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.713 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.972 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:28.972 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.972 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.972 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.972 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.972 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.416 nvme0n1 00:26:29.416 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:29.416 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.416 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.416 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.416 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:29.416 09:58:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.416 Running I/O for 2 seconds... 
00:26:29.416 [2024-11-20 09:58:52.682155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e1f80 00:26:29.416 [2024-11-20 09:58:52.683142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.416 [2024-11-20 09:58:52.683173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:29.416 [2024-11-20 09:58:52.691675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e23b8 00:26:29.416 [2024-11-20 09:58:52.692737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.416 [2024-11-20 09:58:52.692759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:29.416 [2024-11-20 09:58:52.701089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f35f0 00:26:29.416 [2024-11-20 09:58:52.702136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.416 [2024-11-20 09:58:52.702156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:29.416 [2024-11-20 09:58:52.710278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e5a90 00:26:29.417 [2024-11-20 09:58:52.710917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.417 [2024-11-20 09:58:52.710939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.417 [2024-11-20 09:58:52.720043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fe2e8 00:26:29.417 [2024-11-20 09:58:52.721027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.417 [2024-11-20 09:58:52.721049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.729494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166de038 00:26:29.687 [2024-11-20 09:58:52.730484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.730505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.738988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f31b8 00:26:29.687 [2024-11-20 09:58:52.739969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.739990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.748327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f20d8 00:26:29.687 [2024-11-20 09:58:52.749276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.749295] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.757117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fa7d8 00:26:29.687 [2024-11-20 09:58:52.758012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.758031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.768167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ee190 00:26:29.687 [2024-11-20 09:58:52.769630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.769650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.776787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6020 00:26:29.687 [2024-11-20 09:58:52.777874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.777894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.785212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f8618 00:26:29.687 [2024-11-20 09:58:52.786248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.786267] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.793756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166edd58 00:26:29.687 [2024-11-20 09:58:52.794473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.794493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.802859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb8b8 00:26:29.687 [2024-11-20 09:58:52.803569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.803589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.812071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e9e10 00:26:29.687 [2024-11-20 09:58:52.812771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.812791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.821347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e8d30 00:26:29.687 [2024-11-20 09:58:52.822058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11618 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:29.687 [2024-11-20 09:58:52.822080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.830520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e7c50 00:26:29.687 [2024-11-20 09:58:52.831232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.831251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.840921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ec408 00:26:29.687 [2024-11-20 09:58:52.842110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.842130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.850249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ee190 00:26:29.687 [2024-11-20 09:58:52.850973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.850994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.859645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ebb98 00:26:29.687 [2024-11-20 09:58:52.860702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 
nsid:1 lba:9080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.860722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.868833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fac10 00:26:29.687 [2024-11-20 09:58:52.869884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.869903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.878032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6020 00:26:29.687 [2024-11-20 09:58:52.879080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.879099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.887380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f4f40 00:26:29.687 [2024-11-20 09:58:52.888461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.888479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.896834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e23b8 00:26:29.687 [2024-11-20 09:58:52.897983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.898002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.905560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e5ec8 00:26:29.687 [2024-11-20 09:58:52.906711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.906730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.915202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ef6a8 00:26:29.687 [2024-11-20 09:58:52.916466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.916486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.924898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ff3c8 00:26:29.687 [2024-11-20 09:58:52.926290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.926309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.933451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fe2e8 00:26:29.687 
[2024-11-20 09:58:52.934550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.934570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.942084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fa3a0 00:26:29.687 [2024-11-20 09:58:52.943412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.943432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.951414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f81e0 00:26:29.687 [2024-11-20 09:58:52.952381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.687 [2024-11-20 09:58:52.952400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:29.687 [2024-11-20 09:58:52.960554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e99d8 00:26:29.687 [2024-11-20 09:58:52.961514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.688 [2024-11-20 09:58:52.961534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:29.688 [2024-11-20 09:58:52.970076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1212640) with pdu=0x2000166f7538 00:26:29.688 [2024-11-20 09:58:52.971029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.688 [2024-11-20 09:58:52.971049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.688 [2024-11-20 09:58:52.979257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed920 00:26:29.688 [2024-11-20 09:58:52.980196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.688 [2024-11-20 09:58:52.980215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.688 [2024-11-20 09:58:52.988459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e7c50 00:26:29.688 [2024-11-20 09:58:52.989332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.688 [2024-11-20 09:58:52.989352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.688 [2024-11-20 09:58:52.997634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ec408 00:26:29.688 [2024-11-20 09:58:52.998497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.688 [2024-11-20 09:58:52.998517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:29.688 [2024-11-20 09:58:53.007427] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f92c0 00:26:29.688 [2024-11-20 09:58:53.008519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.688 [2024-11-20 09:58:53.008538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:29.948 [2024-11-20 09:58:53.016672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6cc8 00:26:29.948 [2024-11-20 09:58:53.017785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.017805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:29.948 [2024-11-20 09:58:53.024827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e0a68 00:26:29.948 [2024-11-20 09:58:53.025443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.025463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:29.948 [2024-11-20 09:58:53.034008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:29.948 [2024-11-20 09:58:53.034607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.034628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 
dnr:0 00:26:29.948 [2024-11-20 09:58:53.043171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:29.948 [2024-11-20 09:58:53.043880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.043901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.948 [2024-11-20 09:58:53.052481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:29.948 [2024-11-20 09:58:53.053211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.053231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.948 [2024-11-20 09:58:53.061778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:29.948 [2024-11-20 09:58:53.062476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.062503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.948 [2024-11-20 09:58:53.070973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:29.948 [2024-11-20 09:58:53.071570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.071589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.948 [2024-11-20 09:58:53.080135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:29.948 [2024-11-20 09:58:53.080839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.080858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.948 [2024-11-20 09:58:53.089392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:29.948 [2024-11-20 09:58:53.090085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.090105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.948 [2024-11-20 09:58:53.098554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:29.948 [2024-11-20 09:58:53.099263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.948 [2024-11-20 09:58:53.099284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.107820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed0b0 00:26:29.949 [2024-11-20 09:58:53.108421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.108441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.116856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed0b0 00:26:29.949 [2024-11-20 09:58:53.117451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.117470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.126126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed0b0 00:26:29.949 [2024-11-20 09:58:53.126795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.126815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.135296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed0b0 00:26:29.949 [2024-11-20 09:58:53.135988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.136008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.144500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed0b0 00:26:29.949 [2024-11-20 09:58:53.145202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:29.949 [2024-11-20 09:58:53.145222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.153672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed0b0 00:26:29.949 [2024-11-20 09:58:53.154345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.154365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.164034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed0b0 00:26:29.949 [2024-11-20 09:58:53.165094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.165113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.171982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eff18 00:26:29.949 [2024-11-20 09:58:53.172565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.172584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.181463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f3a28 00:26:29.949 [2024-11-20 09:58:53.182274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21239 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.182293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.190197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ef6a8 00:26:29.949 [2024-11-20 09:58:53.190850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.190870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.200971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f4f40 00:26:29.949 [2024-11-20 09:58:53.201653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.201673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.209619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f1430 00:26:29.949 [2024-11-20 09:58:53.210296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.210316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.218609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e3060 00:26:29.949 [2024-11-20 09:58:53.219445] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.219465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.228087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e7818 00:26:29.949 [2024-11-20 09:58:53.228782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.228801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.237367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ee190 00:26:29.949 [2024-11-20 09:58:53.238070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.238091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.246841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fa3a0 00:26:29.949 [2024-11-20 09:58:53.247657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.247676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.255396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fa7d8 00:26:29.949 [2024-11-20 
09:58:53.256168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.256188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.264257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f35f0 00:26:29.949 [2024-11-20 09:58:53.265010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.265029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.949 [2024-11-20 09:58:53.275699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ef270 00:26:29.949 [2024-11-20 09:58:53.276848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.949 [2024-11-20 09:58:53.276867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.209 [2024-11-20 09:58:53.283674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f57b0 00:26:30.209 [2024-11-20 09:58:53.284135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.209 [2024-11-20 09:58:53.284154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:30.209 [2024-11-20 09:58:53.293211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) 
with pdu=0x2000166fb480 00:26:30.209 [2024-11-20 09:58:53.293900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.209 [2024-11-20 09:58:53.293919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.209 [2024-11-20 09:58:53.302254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:30.209 [2024-11-20 09:58:53.303026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.209 [2024-11-20 09:58:53.303049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.209 [2024-11-20 09:58:53.311776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f5be8 00:26:30.209 [2024-11-20 09:58:53.312691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.312711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.320507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f0788 00:26:30.210 [2024-11-20 09:58:53.321233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.321252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.329693] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1212640) with pdu=0x2000166f92c0 00:26:30.210 [2024-11-20 09:58:53.330489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.330507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.340931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166edd58 00:26:30.210 [2024-11-20 09:58:53.342325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.342346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.347573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6020 00:26:30.210 [2024-11-20 09:58:53.348219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.348239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.357212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f4f40 00:26:30.210 [2024-11-20 09:58:53.358003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.358024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 
09:58:53.368629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ec408 00:26:30.210 [2024-11-20 09:58:53.369887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.369907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.378265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fda78 00:26:30.210 [2024-11-20 09:58:53.379649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.379668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.387648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e0a68 00:26:30.210 [2024-11-20 09:58:53.388958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.388979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.395002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f7100 00:26:30.210 [2024-11-20 09:58:53.395927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.395950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.404479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eaab8 00:26:30.210 [2024-11-20 09:58:53.405074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.405094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.414096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fda78 00:26:30.210 [2024-11-20 09:58:53.414808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.414828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.422811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e0630 00:26:30.210 [2024-11-20 09:58:53.424150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.424171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.430703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb8b8 00:26:30.210 [2024-11-20 09:58:53.431295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.431315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.441582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e0ea0 00:26:30.210 [2024-11-20 09:58:53.442215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.442236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.452451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:30.210 [2024-11-20 09:58:53.453837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.453856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.461806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fc128 00:26:30.210 [2024-11-20 09:58:53.463222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.463241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.470811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6458 00:26:30.210 [2024-11-20 09:58:53.472204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.472224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.480170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166de8a8 00:26:30.210 [2024-11-20 09:58:53.481543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.481562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.488138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e6300 00:26:30.210 [2024-11-20 09:58:53.489002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.489022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.496609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f20d8 00:26:30.210 [2024-11-20 09:58:53.497560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.497579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.506860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fd208 00:26:30.210 [2024-11-20 09:58:53.507827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 
[2024-11-20 09:58:53.507848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.516485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e8d30 00:26:30.210 [2024-11-20 09:58:53.517682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.517702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.525703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166dece0 00:26:30.210 [2024-11-20 09:58:53.526997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.527015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.210 [2024-11-20 09:58:53.534485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e99d8 00:26:30.210 [2024-11-20 09:58:53.535487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.210 [2024-11-20 09:58:53.535508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.544045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fda78 00:26:30.471 [2024-11-20 09:58:53.545143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10766 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.545166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.555435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f4b08 00:26:30.471 [2024-11-20 09:58:53.557037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.557056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.562186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f9b30 00:26:30.471 [2024-11-20 09:58:53.563049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.563069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.573643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e01f8 00:26:30.471 [2024-11-20 09:58:53.575008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.575027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.582370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166feb58 00:26:30.471 [2024-11-20 09:58:53.583608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:126 nsid:1 lba:24942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.583628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.592030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e6b70 00:26:30.471 [2024-11-20 09:58:53.593130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.593149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.601651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6890 00:26:30.471 [2024-11-20 09:58:53.602840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.602860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.610743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed920 00:26:30.471 [2024-11-20 09:58:53.611675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.611694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.619257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fbcf0 00:26:30.471 [2024-11-20 09:58:53.620333] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.620352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.628849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ff3c8 00:26:30.471 [2024-11-20 09:58:53.629899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.471 [2024-11-20 09:58:53.629920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:30.471 [2024-11-20 09:58:53.640354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f2510 00:26:30.472 [2024-11-20 09:58:53.641854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.641874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.646956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fda78 00:26:30.472 [2024-11-20 09:58:53.647776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.647795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.658250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with 
pdu=0x2000166e88f8 00:26:30.472 [2024-11-20 09:58:53.659536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.659556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.667581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e38d0 00:26:30.472 [2024-11-20 09:58:53.668870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.668889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.472 27388.00 IOPS, 106.98 MiB/s [2024-11-20T08:58:53.804Z] [2024-11-20 09:58:53.675674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166de038 00:26:30.472 [2024-11-20 09:58:53.676997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.677018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.685346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fe2e8 00:26:30.472 [2024-11-20 09:58:53.686537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.686556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 
09:58:53.693877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ecc78 00:26:30.472 [2024-11-20 09:58:53.694605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.694624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.703389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ef270 00:26:30.472 [2024-11-20 09:58:53.704314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.704333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.712106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6cc8 00:26:30.472 [2024-11-20 09:58:53.712906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.712926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.720905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e1710 00:26:30.472 [2024-11-20 09:58:53.721617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.721636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0027 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.732782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e12d8 00:26:30.472 [2024-11-20 09:58:53.734321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.734341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.739243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e6300 00:26:30.472 [2024-11-20 09:58:53.739924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.739944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.749047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e73e0 00:26:30.472 [2024-11-20 09:58:53.749867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.749886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.758575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e3498 00:26:30.472 [2024-11-20 09:58:53.759166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.759185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.767530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed920 00:26:30.472 [2024-11-20 09:58:53.768421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.768440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.776692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e9168 00:26:30.472 [2024-11-20 09:58:53.777521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.777541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.785422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ebb98 00:26:30.472 [2024-11-20 09:58:53.786245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.786268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.472 [2024-11-20 09:58:53.795109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f8618 00:26:30.472 [2024-11-20 09:58:53.796082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.472 [2024-11-20 09:58:53.796103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.804420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fd640 00:26:30.732 [2024-11-20 09:58:53.805041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.805061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.813755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e27f0 00:26:30.732 [2024-11-20 09:58:53.814233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.814254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.822549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166edd58 00:26:30.732 [2024-11-20 09:58:53.823374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.823394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.833806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ec408 00:26:30.732 [2024-11-20 09:58:53.834999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.732 [2024-11-20 09:58:53.835020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.842536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e9168 00:26:30.732 [2024-11-20 09:58:53.843670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.843690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.852109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f92c0 00:26:30.732 [2024-11-20 09:58:53.853398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.853418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.860558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fdeb0 00:26:30.732 [2024-11-20 09:58:53.861852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.861872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.868441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb480 00:26:30.732 [2024-11-20 09:58:53.869153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16170 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.869173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.879706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ef6a8 00:26:30.732 [2024-11-20 09:58:53.880789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.880809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.888602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e23b8 00:26:30.732 [2024-11-20 09:58:53.889639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.889659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.897932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166de470 00:26:30.732 [2024-11-20 09:58:53.898910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.898929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.907309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e1b48 00:26:30.732 [2024-11-20 09:58:53.908169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.908189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.916710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f8618 00:26:30.732 [2024-11-20 09:58:53.917771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.917791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.925382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fdeb0 00:26:30.732 [2024-11-20 09:58:53.926661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.926681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.732 [2024-11-20 09:58:53.933257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f7970 00:26:30.732 [2024-11-20 09:58:53.933942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.732 [2024-11-20 09:58:53.933968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:53.943475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f3a28 00:26:30.733 [2024-11-20 09:58:53.944216] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:53.944237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:53.952268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e01f8 00:26:30.733 [2024-11-20 09:58:53.953098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:53.953117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:53.963413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ebb98 00:26:30.733 [2024-11-20 09:58:53.964592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:53.964612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:53.973172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f96f8 00:26:30.733 [2024-11-20 09:58:53.974623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:53.974643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:53.981678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f31b8 
00:26:30.733 [2024-11-20 09:58:53.982660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:53.982680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:53.992042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6020 00:26:30.733 [2024-11-20 09:58:53.993588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:53.993607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:53.998499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e4de8 00:26:30.733 [2024-11-20 09:58:53.999241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:53.999261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:54.009001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ec840 00:26:30.733 [2024-11-20 09:58:54.010173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:54.010192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:54.018618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1212640) with pdu=0x2000166f6890 00:26:30.733 [2024-11-20 09:58:54.019958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:54.019978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:54.027214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ee190 00:26:30.733 [2024-11-20 09:58:54.028066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:54.028090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:54.035893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eb328 00:26:30.733 [2024-11-20 09:58:54.036823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:54.036843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:54.045296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e5658 00:26:30.733 [2024-11-20 09:58:54.046283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:54.046303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.733 [2024-11-20 09:58:54.056744] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166de038 00:26:30.733 [2024-11-20 09:58:54.058234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-20 09:58:54.058253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.063509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f3a28 00:26:30.993 [2024-11-20 09:58:54.064249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.064268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.073806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f7538 00:26:30.993 [2024-11-20 09:58:54.074587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.074606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.083267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e3060 00:26:30.993 [2024-11-20 09:58:54.084252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.084272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:26:30.993 [2024-11-20 09:58:54.092606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eaef0 00:26:30.993 [2024-11-20 09:58:54.093502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.093522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.102097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fac10 00:26:30.993 [2024-11-20 09:58:54.103233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.103252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.112572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e95a0 00:26:30.993 [2024-11-20 09:58:54.114157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.114176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.119210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e5658 00:26:30.993 [2024-11-20 09:58:54.120118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.120139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.130153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166dece0 00:26:30.993 [2024-11-20 09:58:54.131280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.131300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.139874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e2c28 00:26:30.993 [2024-11-20 09:58:54.141233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.141253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.148245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ff3c8 00:26:30.993 [2024-11-20 09:58:54.149597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.149617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.993 [2024-11-20 09:58:54.156134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fc998 00:26:30.993 [2024-11-20 09:58:54.156886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.993 [2024-11-20 09:58:54.156906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.167517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ec840 00:26:30.994 [2024-11-20 09:58:54.168855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.168883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.177127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ef270 00:26:30.994 [2024-11-20 09:58:54.178583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.178602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.186770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e3d08 00:26:30.994 [2024-11-20 09:58:54.188359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.188378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.193237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fc998 00:26:30.994 [2024-11-20 09:58:54.194012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.194031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.202798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fc560 00:26:30.994 [2024-11-20 09:58:54.203347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.203366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.212314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f9f68 00:26:30.994 [2024-11-20 09:58:54.213093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.213113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.222451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ef270 00:26:30.994 [2024-11-20 09:58:54.223715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.223735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.231465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e6b70 00:26:30.994 [2024-11-20 09:58:54.232667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23499 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:30.994 [2024-11-20 09:58:54.232687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.240790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fc560 00:26:30.994 [2024-11-20 09:58:54.241996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.242016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.249577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ef270 00:26:30.994 [2024-11-20 09:58:54.250618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.250638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.258867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e8d30 00:26:30.994 [2024-11-20 09:58:54.260001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.260020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.267492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e5ec8 00:26:30.994 [2024-11-20 09:58:54.268360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:22645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.268382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.276766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eea00 00:26:30.994 [2024-11-20 09:58:54.277670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.277689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.286408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f46d0 00:26:30.994 [2024-11-20 09:58:54.287447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.287466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.294934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f7da8 00:26:30.994 [2024-11-20 09:58:54.295509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.295528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.304402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f4f40 00:26:30.994 [2024-11-20 09:58:54.305291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.305312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.994 [2024-11-20 09:58:54.313747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f3a28 00:26:30.994 [2024-11-20 09:58:54.314641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.994 [2024-11-20 09:58:54.314661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:31.253 [2024-11-20 09:58:54.325019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e6fa8 00:26:31.253 [2024-11-20 09:58:54.326413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.253 [2024-11-20 09:58:54.326433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:31.253 [2024-11-20 09:58:54.331715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e5658 00:26:31.253 [2024-11-20 09:58:54.332394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.253 [2024-11-20 09:58:54.332412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:31.253 [2024-11-20 09:58:54.341331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6890 00:26:31.253 
[2024-11-20 09:58:54.342118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.253 [2024-11-20 09:58:54.342137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:31.253 [2024-11-20 09:58:54.350653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e4578 00:26:31.253 [2024-11-20 09:58:54.351446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.253 [2024-11-20 09:58:54.351465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:31.253 [2024-11-20 09:58:54.361723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166dfdc0 00:26:31.253 [2024-11-20 09:58:54.362993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.253 [2024-11-20 09:58:54.363012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:31.253 [2024-11-20 09:58:54.371061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f6890 00:26:31.253 [2024-11-20 09:58:54.372343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.253 [2024-11-20 09:58:54.372362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:31.253 [2024-11-20 09:58:54.377817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1212640) with pdu=0x2000166e23b8 00:26:31.253 [2024-11-20 09:58:54.378504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.378523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.389291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166de8a8 00:26:31.254 [2024-11-20 09:58:54.390576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.390596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.398923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eaab8 00:26:31.254 [2024-11-20 09:58:54.400319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.400338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.408574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f31b8 00:26:31.254 [2024-11-20 09:58:54.410082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.410102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.415040] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e23b8 00:26:31.254 [2024-11-20 09:58:54.415779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.415799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.425323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eee38 00:26:31.254 [2024-11-20 09:58:54.426367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.426386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.434292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e3d08 00:26:31.254 [2024-11-20 09:58:54.435243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.435263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.443911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f1430 00:26:31.254 [2024-11-20 09:58:54.444979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.444998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:26:31.254 [2024-11-20 09:58:54.451230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ec840 00:26:31.254 [2024-11-20 09:58:54.451800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.451819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.463193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e12d8 00:26:31.254 [2024-11-20 09:58:54.464486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.464506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.472718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e6300 00:26:31.254 [2024-11-20 09:58:54.474017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.474036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.480053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eaab8 00:26:31.254 [2024-11-20 09:58:54.480860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.480880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.490269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f1ca0 00:26:31.254 [2024-11-20 09:58:54.491233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.491253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.499161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f0350 00:26:31.254 [2024-11-20 09:58:54.500072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.500093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.509200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f57b0 00:26:31.254 [2024-11-20 09:58:54.510374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.510394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.517796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f1868 00:26:31.254 [2024-11-20 09:58:54.518717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.518738] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.527219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ddc00 00:26:31.254 [2024-11-20 09:58:54.528162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.528182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.536333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eaab8 00:26:31.254 [2024-11-20 09:58:54.537046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.537067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.546509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166dfdc0 00:26:31.254 [2024-11-20 09:58:54.547728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.547748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.555817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f3e60 00:26:31.254 [2024-11-20 09:58:54.556994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.557013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.563893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e01f8 00:26:31.254 [2024-11-20 09:58:54.565100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.565120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.571783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166eb760 00:26:31.254 [2024-11-20 09:58:54.572358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.254 [2024-11-20 09:58:54.572377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:31.254 [2024-11-20 09:58:54.583101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166f4f40 00:26:31.513 [2024-11-20 09:58:54.584204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.584226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.592585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ff3c8 00:26:31.513 [2024-11-20 09:58:54.593744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:274 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:31.513 [2024-11-20 09:58:54.593767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.601346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e1f80 00:26:31.513 [2024-11-20 09:58:54.602319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.602340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.610532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e6b70 00:26:31.513 [2024-11-20 09:58:54.611392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.611412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.620453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e9e10 00:26:31.513 [2024-11-20 09:58:54.621749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.621770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.630114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ff3c8 00:26:31.513 [2024-11-20 09:58:54.631228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:22892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.631247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.637937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb8b8 00:26:31.513 [2024-11-20 09:58:54.638676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.638696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.649228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166fb8b8 00:26:31.513 [2024-11-20 09:58:54.650518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.650539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.659136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166ed4e8 00:26:31.513 [2024-11-20 09:58:54.660480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.660500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.665877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e3060 00:26:31.513 [2024-11-20 09:58:54.666492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.666512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:31.513 [2024-11-20 09:58:54.675378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212640) with pdu=0x2000166e95a0 00:26:31.513 [2024-11-20 09:58:54.676479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.513 [2024-11-20 09:58:54.676501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:31.513 27479.00 IOPS, 107.34 MiB/s 00:26:31.513 Latency(us) 00:26:31.513 [2024-11-20T08:58:54.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.513 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:31.513 nvme0n1 : 2.01 27503.89 107.44 0.00 0.00 4648.30 1809.36 12480.33 00:26:31.513 [2024-11-20T08:58:54.845Z] =================================================================================================================== 00:26:31.513 [2024-11-20T08:58:54.845Z] Total : 27503.89 107.44 0.00 0.00 4648.30 1809.36 12480.33 00:26:31.513 { 00:26:31.513 "results": [ 00:26:31.513 { 00:26:31.513 "job": "nvme0n1", 00:26:31.513 "core_mask": "0x2", 00:26:31.513 "workload": "randwrite", 00:26:31.513 "status": "finished", 00:26:31.513 "queue_depth": 128, 00:26:31.513 "io_size": 4096, 00:26:31.513 "runtime": 2.005753, 00:26:31.513 "iops": 27503.885074582962, 00:26:31.513 "mibps": 107.4370510725897, 00:26:31.513 "io_failed": 0, 00:26:31.513 "io_timeout": 0, 00:26:31.513 "avg_latency_us": 4648.299025770441, 00:26:31.513 "min_latency_us": 1809.3634782608697, 00:26:31.513 "max_latency_us": 12480.333913043478 
00:26:31.513 }
00:26:31.513 ],
00:26:31.513 "core_count": 1
00:26:31.513 }
00:26:31.513 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:31.513 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:31.513 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:31.513 | .driver_specific
00:26:31.514 | .nvme_error
00:26:31.514 | .status_code
00:26:31.514 | .command_transient_transport_error'
00:26:31.514 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3061203
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3061203 ']'
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3061203
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061203
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:31.773 09:58:54
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061203'
killing process with pid 3061203
09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3061203
Received shutdown signal, test time was about 2.000000 seconds
00:26:31.773
00:26:31.773 Latency(us)
00:26:31.773 [2024-11-20T08:58:55.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.773 [2024-11-20T08:58:55.105Z] ===================================================================================================================
00:26:31.773 [2024-11-20T08:58:55.105Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:31.773 09:58:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3061203
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3061782
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3061782 /var/tmp/bperf.sock
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:32.032 09:58:55
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3061782 ']'
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:32.032 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:32.032 [2024-11-20 09:58:55.159891] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization...
[2024-11-20 09:58:55.159938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061782 ]
00:26:32.032 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:32.032 Zero copy mechanism will not be used.
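The bdevperf results block earlier in this log reports the same run two ways: 27503.89 IOPS and 107.44 MiB/s at a 4096-byte I/O size. The two figures are just unit conversions of each other, MiB/s = IOPS × io_size / 2^20. A minimal sketch recomputing the derived numbers from the raw fields (values copied from the "results" JSON in this log; the dict here is illustrative, not the full RPC payload):

```python
# Recompute bdevperf's derived throughput figures from the raw fields
# in the "results" JSON block above (values copied from this log).
result = {
    "runtime": 2.005753,         # seconds
    "iops": 27503.885074582962,  # I/Os per second
    "io_size": 4096,             # bytes per I/O (the qd-128 randwrite run)
}

# MiB/s = IOPS * bytes-per-IO / 2^20
mibps = result["iops"] * result["io_size"] / (1 << 20)

# Total I/Os completed over the ~2 s run
total_ios = result["iops"] * result["runtime"]

print(f"{mibps:.4f} MiB/s over {total_ios:.0f} I/Os")
```

This reproduces the logged `"mibps": 107.4370510725897` to within rounding, which is a quick sanity check when reading these result blocks.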
00:26:32.032 [2024-11-20 09:58:55.234775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.032 [2024-11-20 09:58:55.272338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.291 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.550 nvme0n1 00:26:32.550 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:32.550 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.550 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.810 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.810 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:32.810 09:58:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:32.810 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:32.810 Zero copy mechanism will not be used. 00:26:32.810 Running I/O for 2 seconds... 00:26:32.810 [2024-11-20 09:58:55.969383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:55.969471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:55.969500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:55.973919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:55.973991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:55.974015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.810 
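The `Data digest error on tqpair` lines above are the receive path (`tcp.c:data_crc32_calc_done`) recomputing the NVMe/TCP data digest over each PDU: the controller was attached with `--ddgst`, then `accel_error_inject_error -o crc32c -t corrupt -i 32` deliberately corrupts the crc32c result, so every checked PDU fails and completes with TRANSIENT TRANSPORT ERROR. The digest itself is CRC-32C (Castagnoli). A minimal bitwise sketch of that checksum for reference (not SPDK's accelerated implementation):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), the checksum used for the NVMe/TCP
    data digest that --ddgst enables. Reflected polynomial 0x82F63B78,
    initial value and final XOR are all-ones."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C
print(hex(crc32c(b"123456789")))  # -> 0xe3069283

# A single corrupted byte changes the digest, which is exactly what the
# injected crc32c corruption provokes on the receive path (payloads here
# are illustrative, not real PDU contents)
good = crc32c(b"some pdu payload")
bad = crc32c(b"tome pdu payload")
```

A mismatch between the digest carried in the PDU and the recomputed value is what each of the error records above represents.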
[2024-11-20 09:58:55.978539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:55.978610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:55.978633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:55.982821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:55.982922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:55.982945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:55.987038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:55.987091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:55.987111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:55.991252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:55.991321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:55.991342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:55.995570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:55.995631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:55.995655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:56.000020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:56.000084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:56.000103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:56.004150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:56.004230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:56.004249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:56.008381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:56.008437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:56.008457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:56.012485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:56.012553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:56.012573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:56.016605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:56.016666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:56.016686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:56.020724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:56.020778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:56.020798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:56.024900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.810 [2024-11-20 09:58:56.024961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.810 [2024-11-20 09:58:56.024980] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.810 [2024-11-20 09:58:56.029019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.029087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.029106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.033123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.033221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.033241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.037944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.038113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.038134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.044016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.044190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.044212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.049726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.049886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.049905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.054690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.054749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.054769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.059808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.059908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.059928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.065697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.065787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.065806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.070738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.070825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.070845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.075814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.075976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.075996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.081268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.081382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.811 [2024-11-20 09:58:56.081402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.811 [2024-11-20 09:58:56.086467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:32.811 [2024-11-20 09:58:56.086563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.086582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.091683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.091764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.091783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.096824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.096896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.096915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.102164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.102276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.102295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.107005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.107061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.107080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.111793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.112014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.112034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.116394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.116634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.116655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.121576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.121821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.121845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.126287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.126529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.126549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.130558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.130804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.130825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.134926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.135187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.135209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.811 [2024-11-20 09:58:56.139038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:32.811 [2024-11-20 09:58:56.139289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.811 [2024-11-20 09:58:56.139310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.071 [2024-11-20 09:58:56.143340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.143602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.143624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.147842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.148092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.148113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.151999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.152252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.152272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.155914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.156173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.156194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.159902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.160174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.160196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.163874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.164143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.164164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.167827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.168080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.168101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.171770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.172030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.172051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.175803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.176057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.176077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.180061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.180301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.180323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.184907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.185166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.185188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.189548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.189794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.189814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.194307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.194534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.194554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.198627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.198874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.198894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.203216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.203460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.203480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.207615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.207878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.207898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.212029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.212281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.212302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.216029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.216282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.216302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.220347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.220598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.220618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.224828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.225079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.225099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.230699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.230939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.230967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.235768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.236021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.236046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.240288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.072 [2024-11-20 09:58:56.240532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.072 [2024-11-20 09:58:56.240552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.072 [2024-11-20 09:58:56.244607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.244857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.244878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.249002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.249253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.249273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.253521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.253773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.253793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.258049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.258282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.258302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.262322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.262558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.262578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.266756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.267014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.267035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.271225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.271489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.271509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.275682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.275934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.275962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.279809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.280057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.280078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.284197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.284452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.284473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.288649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.288895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.288916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.293924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.294164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.294185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.298764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.299024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.299045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.303142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.303394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.303414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.307456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.307713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.307733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.311790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.312057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.312077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.316563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.316827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.316848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.321515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.321764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.321785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.325625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.325869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.325889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.329618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.329864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.329884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.333643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.333889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.333910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.337649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.337903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.073 [2024-11-20 09:58:56.337924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.073 [2024-11-20 09:58:56.341701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.073 [2024-11-20 09:58:56.341965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.341986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.345742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.346001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.346021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.349770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.350027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.350051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.353776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.354027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.354048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.357724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.357980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.358000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.361690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.361941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.361967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.365673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.365924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.365944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.369739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.369989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.370009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.373699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.373954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.373974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.377684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.377935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.377961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.381641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.381891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.381911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.385622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.385875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.385895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.389583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.389828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.389848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.393514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.393763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.393783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.074 [2024-11-20 09:58:56.397539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.074 [2024-11-20 09:58:56.397790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.074 [2024-11-20 09:58:56.397810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.334 [2024-11-20 09:58:56.401586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.334 [2024-11-20 09:58:56.401848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.334 [2024-11-20 09:58:56.401868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.334 [2024-11-20 09:58:56.405622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.334 [2024-11-20 09:58:56.405872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.334 [2024-11-20 09:58:56.405892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.334 [2024-11-20 09:58:56.409615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.334 [2024-11-20 09:58:56.409874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.334 [2024-11-20 09:58:56.409894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.334 [2024-11-20 09:58:56.413804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.334 [2024-11-20 09:58:56.414045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.334 [2024-11-20 09:58:56.414065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.334 [2024-11-20 09:58:56.418635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.334 [2024-11-20 09:58:56.418877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.334 [2024-11-20 09:58:56.418897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.334 [2024-11-20 09:58:56.423569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.334 [2024-11-20 09:58:56.423799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.334 [2024-11-20 09:58:56.423819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.334 [2024-11-20 09:58:56.428966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.334 [2024-11-20 09:58:56.429217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.334 [2024-11-20 09:58:56.429237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.334 [2024-11-20 09:58:56.433846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.334 [2024-11-20 09:58:56.434074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.334 [2024-11-20 09:58:56.434094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.334 [2024-11-20 09:58:56.439044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.334 [2024-11-20 09:58:56.439283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.334 [2024-11-20 09:58:56.439304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.334 [2024-11-20 09:58:56.443473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.334 [2024-11-20 09:58:56.443711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.334 [2024-11-20 09:58:56.443731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.334 [2024-11-20 09:58:56.447873] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.334 [2024-11-20 09:58:56.448118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.334 [2024-11-20 09:58:56.448139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.334 [2024-11-20 09:58:56.452089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.334 [2024-11-20 09:58:56.452328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.334 [2024-11-20 09:58:56.452348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.334 [2024-11-20 09:58:56.456452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.334 [2024-11-20 09:58:56.456695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.334 [2024-11-20 09:58:56.456715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.334 [2024-11-20 09:58:56.460745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.334 [2024-11-20 09:58:56.461013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.334 [2024-11-20 09:58:56.461036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:33.335 [2024-11-20 09:58:56.465079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.465319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.465339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.469601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.469848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.469868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.474065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.474313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.474333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.478433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.478682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.478703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.482848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.483095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.483114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.487410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.487662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.487682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.492168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.492415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.492436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.496895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.497126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.497147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.501889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.502135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.502156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.506601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.506832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.506853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.511698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.511963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.511984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.516639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.516873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.335 [2024-11-20 09:58:56.516893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.521559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.521797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.521817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.526418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.526664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.526684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.530903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.531136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.531157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.535135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.535395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.535416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.539492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.539741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.539761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.544020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.544253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.544274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.548708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.548936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.548963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.553167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.553403] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.553423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.557618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.557859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.557879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.561984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.562231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.562251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.566227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.566480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.566500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.570401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.570645] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.570666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.574656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.574903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.574923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.578816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.579063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.579086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.335 [2024-11-20 09:58:56.582994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.335 [2024-11-20 09:58:56.583246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.335 [2024-11-20 09:58:56.583266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.587178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with 
pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.587423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.587443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.591345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.591594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.591614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.595555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.595803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.595823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.599751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.599997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.600017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.603961] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.604210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.604240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.608166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.608417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.608437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.612370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.612612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.612632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.616533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.616784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.616804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 
09:58:56.620685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.621000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.621020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.625096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.625345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.625366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.629320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.629576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.629596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.633514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.633766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.633787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.637696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.637941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.637967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.641891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.642133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.642153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.646095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.646354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.646374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.650289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.650548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.650568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.654502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.654749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.654769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.658654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.336 [2024-11-20 09:58:56.658896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.336 [2024-11-20 09:58:56.658917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.336 [2024-11-20 09:58:56.662883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.596 [2024-11-20 09:58:56.663137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.596 [2024-11-20 09:58:56.663159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.596 [2024-11-20 09:58:56.667126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.596 [2024-11-20 09:58:56.667375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.596 [2024-11-20 09:58:56.667396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.596 [2024-11-20 09:58:56.671355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.596 [2024-11-20 09:58:56.671592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.671612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.676075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.676401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.676422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.682013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.682331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.682352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.686809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.687060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.597 [2024-11-20 09:58:56.687080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.691403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.691654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.691678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.695906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.696159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.696180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.700492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.700729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.700749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.705162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.705421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.705441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.710161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.710405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.710425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.715016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.715270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.715291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.719670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.719922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.719942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.724177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.724424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.724444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.728729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.728997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.729019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.733287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.733562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.733584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.738116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.738381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.738402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.743034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 
00:26:33.597 [2024-11-20 09:58:56.743273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.743293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.748269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.748516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.748536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.754095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.754328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.754350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.758702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.758941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.758971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.763258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.763505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.763526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.767736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.767980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.768000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.772262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.772508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.772529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.776692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.776937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.776964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.781167] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.781418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.781438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.785625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.785876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.785897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.790077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.790320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.790341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.597 [2024-11-20 09:58:56.794567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.597 [2024-11-20 09:58:56.794809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.597 [2024-11-20 09:58:56.794830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:33.597 [2024-11-20 09:58:56.799016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.799264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.799284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.803403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.803650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.803670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.807935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.808189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.808209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.812432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.812679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.812703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.816954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.817197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.817218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.821690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.821937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.821966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.827133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.827372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.827392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.832439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.832688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.832709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.838413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.838682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.838703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.844520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.844816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.844837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.851062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.851324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.851345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.857384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.857546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.598 [2024-11-20 09:58:56.857566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.863715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.863935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.863969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.869628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.869874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.869894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.875241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.875488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.875509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.881270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.881521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.881542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.888145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.888341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.888362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.893974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.894208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.894229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.898566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.898781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.898802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.903334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.903587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.903607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.909152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.909430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.909450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.914893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.915110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.915131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.919941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.598 [2024-11-20 09:58:56.920161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.920182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.598 [2024-11-20 09:58:56.924720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 
00:26:33.598 [2024-11-20 09:58:56.924933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.598 [2024-11-20 09:58:56.924961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.929638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.929893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.929914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.934533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.934758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.934795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.939416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.939650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.939671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.944316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.944529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.944549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.948993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.949204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.949225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.953747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.954016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.954041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.958722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.958970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.958991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.963519] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.963758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.963779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.968210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.968465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.968486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.859 6742.00 IOPS, 842.75 MiB/s [2024-11-20T08:58:57.191Z] [2024-11-20 09:58:56.974259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.974486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.974505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.978533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.978759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.978780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.982712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.982897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.859 [2024-11-20 09:58:56.982916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.859 [2024-11-20 09:58:56.986735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.859 [2024-11-20 09:58:56.986901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.860 [2024-11-20 09:58:56.986920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.860 [2024-11-20 09:58:56.990715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.860 [2024-11-20 09:58:56.990898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.860 [2024-11-20 09:58:56.990917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.860 [2024-11-20 09:58:56.994647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.860 [2024-11-20 09:58:56.994828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.860 [2024-11-20 09:58:56.994847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.860 [2024-11-20 09:58:56.998596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.860 [2024-11-20 09:58:56.998741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.860 [2024-11-20 09:58:56.998760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.860 [2024-11-20 09:58:57.002531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.860 [2024-11-20 09:58:57.002688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.860 [2024-11-20 09:58:57.002706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.860 [2024-11-20 09:58:57.006422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.860 [2024-11-20 09:58:57.006574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.860 [2024-11-20 09:58:57.006594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.860 [2024-11-20 09:58:57.010307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:33.860 [2024-11-20 09:58:57.010468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.860 [2024-11-20 09:58:57.010488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.014246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.014397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.014416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.018315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.018446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.018464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.022863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.023021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.023040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.027624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.027747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.027766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.032680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.032832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.032851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.037599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.037747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.037766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.042259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.042377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.042396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.047177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.047327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.047346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.052525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.052687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.052705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.057558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.057743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.057761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.062094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.062227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.062246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.067242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.067374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.067393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.071835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.072034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.072062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.076565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.076697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.076715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.081170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.081321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.081340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.086583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.086768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.086787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.091594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.091743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.091762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.096411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.096542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.096563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.100836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.100974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.100994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.105341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.105470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.860 [2024-11-20 09:58:57.105489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.860 [2024-11-20 09:58:57.110333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.860 [2024-11-20 09:58:57.110484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.110503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.114976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.115118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.115137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.119728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.119874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.119893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.124123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.124266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.124285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.128197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.128352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.128373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.132169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.132318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.132337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.136334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.136491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.136512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.140449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.140601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.140621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.144533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.144686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.144705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.148540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.148684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.148703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.152601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.152750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.152769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.156689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.156820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.156838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.161160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.161314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.161333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.165992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.166148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.166170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.170362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.170511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.170531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.174556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.174684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.174703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.179586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.179734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.179753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.861 [2024-11-20 09:58:57.183945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:33.861 [2024-11-20 09:58:57.184107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.861 [2024-11-20 09:58:57.184126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.188543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.188680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.188703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.193188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.193329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.193348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.197825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.197964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.197983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.202378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.202551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.202572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.206490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.206639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.206658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.210455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.210607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.210627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.214410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.214566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.214585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.218502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.218666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.218685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.222578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.222727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.222746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.226667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.226830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.226850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.230739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.230900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.230920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.234680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.234839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.234858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.238623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.238781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.238802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.242551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.242704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.242722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.246453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.246606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.246625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.250309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.122 [2024-11-20 09:58:57.250477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.122 [2024-11-20 09:58:57.250498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.122 [2024-11-20 09:58:57.254175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.254331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.254349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.258050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.258202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.258223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.261964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.262103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.262122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.265870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.266027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.266046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.269773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.269919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.269937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.273837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.273992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.274011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.278360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.278498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.278517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.282800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.282951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.282971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.287767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.287911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.287930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.292412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.292663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.292684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.296411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.296577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.296600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.300384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.300531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.300549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.304432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.304585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.304605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.308470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.308621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.308642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.312465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.312615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.312634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.123 [2024-11-20 09:58:57.316436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.123 [2024-11-20 09:58:57.316603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.123 [2024-11-20 09:58:57.316624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.320472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.320615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.320633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.324458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.324611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.324630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.328499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.328657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.328676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.332505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.332663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.332683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.336545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.336713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.336732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.340788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.340958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.340979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.345044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.345206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.345226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.349055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.349220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.349240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.354015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.354267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.354288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.358880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.359077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.123 [2024-11-20 09:58:57.359097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.123 [2024-11-20 09:58:57.363194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.123 [2024-11-20 09:58:57.363337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.363356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.367496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 
00:26:34.124 [2024-11-20 09:58:57.367664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.367683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.371853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.372078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.372098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.376098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.376225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.376244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.380419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.380598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.380617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.385294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.385458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.385477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.390435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.390609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.390628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.396296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.396551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.396572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.402672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.402862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.402881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.408873] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.409071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.409090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.416443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.416646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.416670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.422447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.422539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.422559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.427115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.427273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.427292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:34.124 [2024-11-20 09:58:57.432336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.432498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.432516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.437248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.437362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.437382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.442345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.442554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.442574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.124 [2024-11-20 09:58:57.447694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.124 [2024-11-20 09:58:57.447837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.124 [2024-11-20 09:58:57.447856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.385 [2024-11-20 09:58:57.453207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.385 [2024-11-20 09:58:57.453328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.385 [2024-11-20 09:58:57.453348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.385 [2024-11-20 09:58:57.457976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.385 [2024-11-20 09:58:57.458121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.385 [2024-11-20 09:58:57.458140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.385 [2024-11-20 09:58:57.462452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.385 [2024-11-20 09:58:57.462587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.385 [2024-11-20 09:58:57.462607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.385 [2024-11-20 09:58:57.467373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.385 [2024-11-20 09:58:57.467502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.385 [2024-11-20 09:58:57.467521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.385 [2024-11-20 09:58:57.472438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.385 [2024-11-20 09:58:57.472574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.385 [2024-11-20 09:58:57.472594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.385 [2024-11-20 09:58:57.476861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.385 [2024-11-20 09:58:57.476995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.385 [2024-11-20 09:58:57.477014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.385 [2024-11-20 09:58:57.481776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.385 [2024-11-20 09:58:57.481944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.385 [2024-11-20 09:58:57.481972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.385 [2024-11-20 09:58:57.486466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.385 [2024-11-20 09:58:57.486615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.385 [2024-11-20 09:58:57.486635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.385 [2024-11-20 09:58:57.491209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.491356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.491375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.495853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.495995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.496014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.500406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.500547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.500565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.504466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.504621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.504639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.508508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.508666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.508685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.512443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.512593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.512612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.517294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.517451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.517469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.521289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.521443] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.521462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.525276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.525440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.525459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.529279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.529452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.529470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.533223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.533383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.533404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.537154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.537314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.537336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.541045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.541198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.541217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.544937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.545105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.545123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.548823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.549005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.549024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.552705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with 
pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.552869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.552887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.556604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.556767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.556786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.560509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.560679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.560697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.564422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.564608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.564628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.568323] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.568494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.568513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.572220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.572381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.572401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.576134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.576298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.576317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.580030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.580189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.580208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 
09:58:57.583929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.584101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.584120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.587844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.588019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.588038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.591730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.591902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.591921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.595737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.386 [2024-11-20 09:58:57.595907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.386 [2024-11-20 09:58:57.595926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:34.386 [2024-11-20 09:58:57.600298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.600560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.600581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.605496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.605776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.605796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.610548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.610776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.610794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.615637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.615937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.615966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.620743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.621118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.621140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.626334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.626504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.626523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.631930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.632143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.632163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.637144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.637356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.637377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.642554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.642835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.642856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.647831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.648017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.648037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.653183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.653477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.653500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.658427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.658708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.387 [2024-11-20 09:58:57.658727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.664007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.664281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.664301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.669444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.669676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.669697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.674559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.674824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.674845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.679927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.680094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.680113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.685085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.685297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.685317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.689843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.689994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.690014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.694058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.694168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.694187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.698239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.698365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.698383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.702404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.702526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.702544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.706749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.706927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.706946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.387 [2024-11-20 09:58:57.711010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.387 [2024-11-20 09:58:57.711174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.387 [2024-11-20 09:58:57.711193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.715442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 
00:26:34.648 [2024-11-20 09:58:57.715581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.715602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.719623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.719725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.719745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.723710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.723833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.723852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.727762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.727891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.727911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.731906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.732021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.732041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.735937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.736063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.736083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.740055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.740202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.740221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.744201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.744341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.744360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.748277] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.748419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.748438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.752339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.752453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.752473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.756313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.756433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.756452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.760365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.760484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.760503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:34.648 [2024-11-20 09:58:57.765067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.765301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.765322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.770061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.770167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.770190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.774141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.774280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.774299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.778954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.779095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.779115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.783522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.783627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.783646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.787648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.787776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.787795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.792529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.792706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.792725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.797506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.797628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.797646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.802370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.802584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.802605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.807702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.807921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.807941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.813523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.813666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.813685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.820739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.820880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.648 [2024-11-20 09:58:57.820899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.826349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.826512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.648 [2024-11-20 09:58:57.826531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.648 [2024-11-20 09:58:57.832347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.648 [2024-11-20 09:58:57.832485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.832505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.839268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.839476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.839497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.845140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.845338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.845366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.850368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.850538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.850557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.855669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.855793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.855812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.860051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.860144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.860163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.864203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.864325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.864344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.868465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.868595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.868615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.872648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.872784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.872803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.876789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.876925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.876944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.881034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 
00:26:34.649 [2024-11-20 09:58:57.881141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.881160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.885260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.885358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.885377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.889984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.890114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.890132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.894489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8 00:26:34.649 [2024-11-20 09:58:57.894573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.649 [2024-11-20 09:58:57.894592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.649 [2024-11-20 09:58:57.900180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.900331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.900353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.905116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.905230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.905248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.910955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.911065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.911084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.916562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.916785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.916806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.923098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.923281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.923300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.928476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.928607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.928625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.933840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.933973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.933992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.939397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.939527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.939546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.944721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.944845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.944865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.950498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.950696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.950725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.955755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.956014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.956034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.961075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.961243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.961262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.966211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.966336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.649 [2024-11-20 09:58:57.966355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:34.649 [2024-11-20 09:58:57.971268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1212b20) with pdu=0x2000166ff3c8
00:26:34.649 [2024-11-20 09:58:57.971396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:34.650 [2024-11-20 09:58:57.971415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:34.908 6749.00 IOPS, 843.62 MiB/s
00:26:34.909 Latency(us)
00:26:34.909 [2024-11-20T08:58:58.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:34.909 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:34.909 nvme0n1 : 2.00 6745.80 843.23 0.00 0.00 2367.58 1424.70 7009.50
00:26:34.909 [2024-11-20T08:58:58.241Z] ===================================================================================================================
00:26:34.909 [2024-11-20T08:58:58.241Z] Total : 6745.80 843.23 0.00 0.00 2367.58 1424.70 7009.50
00:26:34.909 {
00:26:34.909 "results": [
00:26:34.909 {
00:26:34.909 "job": "nvme0n1",
00:26:34.909 "core_mask": "0x2",
00:26:34.909 "workload": "randwrite",
00:26:34.909 "status": "finished",
00:26:34.909 "queue_depth": 16,
00:26:34.909 "io_size": 131072,
00:26:34.909 "runtime": 2.00332,
00:26:34.909 "iops": 6745.801968731905,
00:26:34.909 "mibps": 843.2252460914881,
00:26:34.909 "io_failed": 0,
00:26:34.909 "io_timeout": 0,
00:26:34.909 "avg_latency_us": 2367.5776628423987,
00:26:34.909 "min_latency_us": 1424.695652173913,
00:26:34.909 "max_latency_us": 7009.502608695652
00:26:34.909 }
00:26:34.909 ],
00:26:34.909 "core_count": 1
00:26:34.909 }
00:26:34.909 09:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:34.909 09:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:34.909 09:58:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:34.909 | .driver_specific
00:26:34.909 | .nvme_error
00:26:34.909 | .status_code
00:26:34.909 | .command_transient_transport_error'
00:26:34.909 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:34.909 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 436 > 0 ))
00:26:34.909 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3061782
00:26:34.909 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3061782 ']'
00:26:34.909 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3061782
00:26:34.909 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:34.909 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:34.909 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061782
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061782'
00:26:35.168 killing process with pid 3061782
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3061782
00:26:35.168 Received shutdown signal, test time was about 2.000000 seconds
00:26:35.168
00:26:35.168 Latency(us)
00:26:35.168 [2024-11-20T08:58:58.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:35.168 [2024-11-20T08:58:58.500Z] ===================================================================================================================
00:26:35.168 [2024-11-20T08:58:58.500Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3061782
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3060120
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3060120 ']'
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3060120
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060120
00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060120' 00:26:35.168 killing process with pid 3060120 00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3060120 00:26:35.168 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3060120 00:26:35.428 00:26:35.428 real 0m13.788s 00:26:35.428 user 0m26.311s 00:26:35.428 sys 0m4.647s 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.428 ************************************ 00:26:35.428 END TEST nvmf_digest_error 00:26:35.428 ************************************ 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:35.428 rmmod nvme_tcp 00:26:35.428 rmmod nvme_fabrics 00:26:35.428 rmmod nvme_keyring 00:26:35.428 09:58:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3060120 ']' 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3060120 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3060120 ']' 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3060120 00:26:35.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3060120) - No such process 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3060120 is not found' 00:26:35.428 Process with pid 3060120 is not found 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.428 09:58:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.973 09:59:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:37.974 00:26:37.974 real 0m36.430s 00:26:37.974 user 0m55.530s 00:26:37.974 sys 0m13.707s 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:37.974 ************************************ 00:26:37.974 END TEST nvmf_digest 00:26:37.974 ************************************ 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.974 ************************************ 00:26:37.974 START TEST nvmf_bdevperf 00:26:37.974 ************************************ 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:37.974 * Looking for test storage... 
00:26:37.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # lcov --version 00:26:37.974 09:59:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:26:37.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.974 --rc genhtml_branch_coverage=1 00:26:37.974 --rc genhtml_function_coverage=1 00:26:37.974 --rc genhtml_legend=1 00:26:37.974 --rc geninfo_all_blocks=1 00:26:37.974 --rc geninfo_unexecuted_blocks=1 00:26:37.974 00:26:37.974 ' 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1716 -- 
# LCOV_OPTS=' 00:26:37.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.974 --rc genhtml_branch_coverage=1 00:26:37.974 --rc genhtml_function_coverage=1 00:26:37.974 --rc genhtml_legend=1 00:26:37.974 --rc geninfo_all_blocks=1 00:26:37.974 --rc geninfo_unexecuted_blocks=1 00:26:37.974 00:26:37.974 ' 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:26:37.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.974 --rc genhtml_branch_coverage=1 00:26:37.974 --rc genhtml_function_coverage=1 00:26:37.974 --rc genhtml_legend=1 00:26:37.974 --rc geninfo_all_blocks=1 00:26:37.974 --rc geninfo_unexecuted_blocks=1 00:26:37.974 00:26:37.974 ' 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:26:37.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.974 --rc genhtml_branch_coverage=1 00:26:37.974 --rc genhtml_function_coverage=1 00:26:37.974 --rc genhtml_legend=1 00:26:37.974 --rc geninfo_all_blocks=1 00:26:37.974 --rc geninfo_unexecuted_blocks=1 00:26:37.974 00:26:37.974 ' 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.974 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:37.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:37.975 09:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.542 09:59:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.542 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:44.543 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.543 
09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:44.543 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:44.543 Found net devices under 0000:86:00.0: cvl_0_0 00:26:44.543 09:59:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:44.543 Found net devices under 0000:86:00.1: cvl_0_1 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:26:44.543 00:26:44.543 --- 10.0.0.2 ping statistics --- 00:26:44.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.543 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:26:44.543 00:26:44.543 --- 10.0.0.1 ping statistics --- 00:26:44.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.543 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.543 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.544 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.544 09:59:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3065794 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3065794 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3065794 ']' 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
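The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) moves one port of the NIC into a private network namespace so initiator-to-target traffic really crosses the link between the two physical ports. A minimal dry-run sketch of that sequence, with the interface and namespace names from this run hard-coded and each command echoed rather than executed (the real sequence needs root):

```shell
# Dry-run sketch of the nvmf_tcp_init flow traced above (nvmf/common.sh@250-291).
# Interface/namespace names are the ones from this run; run() echoes each
# command instead of executing it, since the real sequence needs root.
run() { echo "$*"; }

nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    # Move the target-side port into its own namespace so initiator->target
    # traffic actually crosses the wire between the two physical ports.
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP port in the host firewall, then sanity-check both ways.
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}

nvmf_tcp_init_sketch
```

The two pings at the end mirror common.sh@290-291 above: one from the root namespace to the target address, one from inside the namespace back to the initiator.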
00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.544 [2024-11-20 09:59:07.083649] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:26:44.544 [2024-11-20 09:59:07.083691] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.544 [2024-11-20 09:59:07.162812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:44.544 [2024-11-20 09:59:07.205341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.544 [2024-11-20 09:59:07.205380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.544 [2024-11-20 09:59:07.205387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.544 [2024-11-20 09:59:07.205393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.544 [2024-11-20 09:59:07.205398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
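nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten (the common/autotest_common.sh@835-844 trace above) blocks, up to max_retries=100, until the target is listening on /var/tmp/spdk.sock. A simplified stand-in for that wait loop, polling for a path to appear; a plain file substitutes for the UNIX socket here, and the real helper also verifies the pid and speaks RPC:

```shell
# Simplified stand-in for the waitforlisten helper traced above
# (common/autotest_common.sh@835-844): poll until a path appears, giving up
# after max_retries. A plain file substitutes for /var/tmp/spdk.sock.
waitforlisten_sketch() {
    local rpc_addr=$1 max_retries=${2:-100} i=0
    while [ ! -e "$rpc_addr" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.05
    done
}

# Usage: start a fake "target" in the background, then block until its
# socket path shows up.
sock=$(mktemp -u)
( sleep 0.1; : > "$sock" ) &
waitforlisten_sketch "$sock" && echo "listening on $sock"
```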
00:26:44.544 [2024-11-20 09:59:07.206808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.544 [2024-11-20 09:59:07.206911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.544 [2024-11-20 09:59:07.206911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.544 [2024-11-20 09:59:07.350313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.544 Malloc0 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:44.544 [2024-11-20 09:59:07.412037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:44.544 
09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:44.544 { 00:26:44.544 "params": { 00:26:44.544 "name": "Nvme$subsystem", 00:26:44.544 "trtype": "$TEST_TRANSPORT", 00:26:44.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:44.544 "adrfam": "ipv4", 00:26:44.544 "trsvcid": "$NVMF_PORT", 00:26:44.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:44.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:44.544 "hdgst": ${hdgst:-false}, 00:26:44.544 "ddgst": ${ddgst:-false} 00:26:44.544 }, 00:26:44.544 "method": "bdev_nvme_attach_controller" 00:26:44.544 } 00:26:44.544 EOF 00:26:44.544 )") 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:44.544 09:59:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:44.544 "params": { 00:26:44.544 "name": "Nvme1", 00:26:44.544 "trtype": "tcp", 00:26:44.544 "traddr": "10.0.0.2", 00:26:44.544 "adrfam": "ipv4", 00:26:44.544 "trsvcid": "4420", 00:26:44.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:44.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:44.544 "hdgst": false, 00:26:44.544 "ddgst": false 00:26:44.544 }, 00:26:44.544 "method": "bdev_nvme_attach_controller" 00:26:44.544 }' 00:26:44.544 [2024-11-20 09:59:07.463163] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:26:44.544 [2024-11-20 09:59:07.463207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065817 ]
00:26:44.544 [2024-11-20 09:59:07.537002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:44.544 [2024-11-20 09:59:07.578813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:44.544 Running I/O for 1 seconds...
00:26:45.922 10979.00 IOPS, 42.89 MiB/s
00:26:45.922 Latency(us)
00:26:45.922 [2024-11-20T08:59:09.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:45.922 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:45.922 Verification LBA range: start 0x0 length 0x4000
00:26:45.922 Nvme1n1 : 1.01 10998.58 42.96 0.00 0.00 11592.98 2564.45 12423.35
00:26:45.922 [2024-11-20T08:59:09.254Z] ===================================================================================================================
00:26:45.922 [2024-11-20T08:59:09.254Z] Total : 10998.58 42.96 0.00 0.00 11592.98 2564.45 12423.35
00:26:45.922 09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3066077
09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for
subsystem in "${@:-1}" 00:26:45.922 09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:45.922 { 00:26:45.922 "params": { 00:26:45.922 "name": "Nvme$subsystem", 00:26:45.922 "trtype": "$TEST_TRANSPORT", 00:26:45.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.922 "adrfam": "ipv4", 00:26:45.922 "trsvcid": "$NVMF_PORT", 00:26:45.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.922 "hdgst": ${hdgst:-false}, 00:26:45.922 "ddgst": ${ddgst:-false} 00:26:45.923 }, 00:26:45.923 "method": "bdev_nvme_attach_controller" 00:26:45.923 } 00:26:45.923 EOF 00:26:45.923 )") 00:26:45.923 09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:45.923 09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:45.923 09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:45.923 09:59:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:45.923 "params": { 00:26:45.923 "name": "Nvme1", 00:26:45.923 "trtype": "tcp", 00:26:45.923 "traddr": "10.0.0.2", 00:26:45.923 "adrfam": "ipv4", 00:26:45.923 "trsvcid": "4420", 00:26:45.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:45.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:45.923 "hdgst": false, 00:26:45.923 "ddgst": false 00:26:45.923 }, 00:26:45.923 "method": "bdev_nvme_attach_controller" 00:26:45.923 }' 00:26:45.923 [2024-11-20 09:59:09.079582] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
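Both bdevperf runs above feed their target description through gen_nvmf_target_json, which builds one JSON "bdev_nvme_attach_controller" fragment per subsystem with a here-doc, joins the fragments with commas, and passes the result on an inherited file descriptor (--json /dev/fd/62 and /dev/fd/63). A trimmed sketch of that generator with the values from this run hard-coded; the real helper (nvmf/common.sh@560-586) substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT and validates the result with jq:

```shell
# Trimmed sketch of the gen_nvmf_target_json pattern (nvmf/common.sh@560-586):
# one JSON fragment per subsystem, joined with commas. Values are hard-coded
# from this trace; the real helper substitutes environment variables and
# pipes the result through jq.
gen_nvmf_target_json_sketch() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}

# One subsystem, as in the runs above:
gen_nvmf_target_json_sketch 1
```

Passing the config over /dev/fd avoids a temporary file: the caller writes the generated JSON to a pipe and bdevperf reads it as if it were a config file.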
00:26:45.923 [2024-11-20 09:59:09.079633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066077 ] 00:26:45.923 [2024-11-20 09:59:09.154238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.923 [2024-11-20 09:59:09.194996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.180 Running I/O for 15 seconds... 00:26:48.497 10964.00 IOPS, 42.83 MiB/s [2024-11-20T08:59:12.091Z] 11155.50 IOPS, 43.58 MiB/s [2024-11-20T08:59:12.091Z] 09:59:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3065794 00:26:48.759 09:59:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:48.759 [2024-11-20 09:59:12.055164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055261] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 
09:59:12.055485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.759 [2024-11-20 09:59:12.055506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.759 [2024-11-20 09:59:12.055658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.759 [2024-11-20 09:59:12.055670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 
lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.055988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.055998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 
[2024-11-20 09:59:12.056068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 
[2024-11-20 09:59:12.056341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.760 [2024-11-20 09:59:12.056379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.760 [2024-11-20 09:59:12.056387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 
[2024-11-20 09:59:12.056602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 
[2024-11-20 09:59:12.056864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.761 [2024-11-20 09:59:12.056992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.761 [2024-11-20 09:59:12.056999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 
[2024-11-20 09:59:12.057135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-11-20 09:59:12.057372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.057381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf37cf0 is same with the state(6) to be set 00:26:48.762 [2024-11-20 09:59:12.057391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.762 [2024-11-20 09:59:12.057397] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.762 [2024-11-20 09:59:12.057403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97928 len:8 PRP1 0x0 PRP2 0x0 00:26:48.762 [2024-11-20 09:59:12.057409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.762 [2024-11-20 09:59:12.060311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.762 [2024-11-20 09:59:12.060371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:48.762 [2024-11-20 09:59:12.060960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.762 [2024-11-20 09:59:12.060979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:48.762 [2024-11-20 09:59:12.060988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:48.762 [2024-11-20 09:59:12.061167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:48.762 [2024-11-20 09:59:12.061346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.762 [2024-11-20 09:59:12.061356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.762 [2024-11-20 09:59:12.061364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.762 [2024-11-20 09:59:12.061373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.762 [2024-11-20 09:59:12.073639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:48.762 [2024-11-20 09:59:12.074061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.762 [2024-11-20 09:59:12.074119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:48.762 [2024-11-20 09:59:12.074145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:48.762 [2024-11-20 09:59:12.074725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:48.762 [2024-11-20 09:59:12.074987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:48.762 [2024-11-20 09:59:12.074998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:48.762 [2024-11-20 09:59:12.075007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:48.762 [2024-11-20 09:59:12.075014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:48.762 [2024-11-20 09:59:12.086705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.087056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.024 [2024-11-20 09:59:12.087076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.024 [2024-11-20 09:59:12.087085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.024 [2024-11-20 09:59:12.087260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.024 [2024-11-20 09:59:12.087435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.024 [2024-11-20 09:59:12.087445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.024 [2024-11-20 09:59:12.087452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.024 [2024-11-20 09:59:12.087460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.024 [2024-11-20 09:59:12.099607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.099997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.024 [2024-11-20 09:59:12.100016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.024 [2024-11-20 09:59:12.100025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.024 [2024-11-20 09:59:12.100210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.024 [2024-11-20 09:59:12.100374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.024 [2024-11-20 09:59:12.100384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.024 [2024-11-20 09:59:12.100392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.024 [2024-11-20 09:59:12.100399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.024 [2024-11-20 09:59:12.112667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.113130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.024 [2024-11-20 09:59:12.113176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.024 [2024-11-20 09:59:12.113201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.024 [2024-11-20 09:59:12.113789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.024 [2024-11-20 09:59:12.114370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.024 [2024-11-20 09:59:12.114380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.024 [2024-11-20 09:59:12.114387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.024 [2024-11-20 09:59:12.114394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.024 [2024-11-20 09:59:12.125605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.126052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.024 [2024-11-20 09:59:12.126070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.024 [2024-11-20 09:59:12.126078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.024 [2024-11-20 09:59:12.126242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.024 [2024-11-20 09:59:12.126406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.024 [2024-11-20 09:59:12.126415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.024 [2024-11-20 09:59:12.126422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.024 [2024-11-20 09:59:12.126429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.024 [2024-11-20 09:59:12.138545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.138955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.024 [2024-11-20 09:59:12.138973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.024 [2024-11-20 09:59:12.138982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.024 [2024-11-20 09:59:12.139155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.024 [2024-11-20 09:59:12.139333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.024 [2024-11-20 09:59:12.139343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.024 [2024-11-20 09:59:12.139349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.024 [2024-11-20 09:59:12.139356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.024 [2024-11-20 09:59:12.151560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.151979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.024 [2024-11-20 09:59:12.151997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.024 [2024-11-20 09:59:12.152005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.024 [2024-11-20 09:59:12.152168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.024 [2024-11-20 09:59:12.152331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.024 [2024-11-20 09:59:12.152342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.024 [2024-11-20 09:59:12.152352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.024 [2024-11-20 09:59:12.152359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.024 [2024-11-20 09:59:12.164546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.164934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.024 [2024-11-20 09:59:12.164991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.024 [2024-11-20 09:59:12.165016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.024 [2024-11-20 09:59:12.165476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.024 [2024-11-20 09:59:12.165641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.024 [2024-11-20 09:59:12.165651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.024 [2024-11-20 09:59:12.165658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.024 [2024-11-20 09:59:12.165664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.024 [2024-11-20 09:59:12.177472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.177799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.024 [2024-11-20 09:59:12.177818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.024 [2024-11-20 09:59:12.177826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.024 [2024-11-20 09:59:12.178004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.024 [2024-11-20 09:59:12.178182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.024 [2024-11-20 09:59:12.178193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.024 [2024-11-20 09:59:12.178200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.024 [2024-11-20 09:59:12.178206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.024 [2024-11-20 09:59:12.190453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.190859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.024 [2024-11-20 09:59:12.190877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.024 [2024-11-20 09:59:12.190885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.024 [2024-11-20 09:59:12.191073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.024 [2024-11-20 09:59:12.191248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.024 [2024-11-20 09:59:12.191258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.024 [2024-11-20 09:59:12.191265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.024 [2024-11-20 09:59:12.191272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.024 [2024-11-20 09:59:12.203325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.024 [2024-11-20 09:59:12.203683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.203701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.203709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.203881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.204063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.204074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.204081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.204089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.025 [2024-11-20 09:59:12.216309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.025 [2024-11-20 09:59:12.216731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.216748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.216755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.216918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.217086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.217097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.217103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.217110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.025 [2024-11-20 09:59:12.229197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.025 [2024-11-20 09:59:12.229652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.229697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.229721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.230249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.230640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.230658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.230673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.230687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.025 [2024-11-20 09:59:12.244019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.025 [2024-11-20 09:59:12.244472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.244498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.244510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.244764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.245025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.245040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.245051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.245061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.025 [2024-11-20 09:59:12.257081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.025 [2024-11-20 09:59:12.257496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.257513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.257521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.257689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.257858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.257868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.257875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.257881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.025 [2024-11-20 09:59:12.270025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.025 [2024-11-20 09:59:12.270384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.270428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.270453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.271045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.271637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.271646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.271653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.271659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.025 [2024-11-20 09:59:12.282966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.025 [2024-11-20 09:59:12.283335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.283381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.283406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.283882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.284053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.284062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.284069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.284075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.025 [2024-11-20 09:59:12.295967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.025 [2024-11-20 09:59:12.296310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.296355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.296378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.296875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.297064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.297074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.297080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.297087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.025 [2024-11-20 09:59:12.308896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.025 [2024-11-20 09:59:12.309281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.309300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.309308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.309480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.309655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.309665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.309673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.309682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.025 [2024-11-20 09:59:12.321911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.025 [2024-11-20 09:59:12.322349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.025 [2024-11-20 09:59:12.322396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.025 [2024-11-20 09:59:12.322421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.025 [2024-11-20 09:59:12.322687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.025 [2024-11-20 09:59:12.322862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.025 [2024-11-20 09:59:12.322873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.025 [2024-11-20 09:59:12.322885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.025 [2024-11-20 09:59:12.322892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.026 [2024-11-20 09:59:12.334941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.026 [2024-11-20 09:59:12.335319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.026 [2024-11-20 09:59:12.335364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.026 [2024-11-20 09:59:12.335388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.026 [2024-11-20 09:59:12.335864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.026 [2024-11-20 09:59:12.336045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.026 [2024-11-20 09:59:12.336056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.026 [2024-11-20 09:59:12.336063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.026 [2024-11-20 09:59:12.336070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.026 [2024-11-20 09:59:12.348040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.026 [2024-11-20 09:59:12.348454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.026 [2024-11-20 09:59:12.348472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.026 [2024-11-20 09:59:12.348480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.026 [2024-11-20 09:59:12.348653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.026 [2024-11-20 09:59:12.348827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.026 [2024-11-20 09:59:12.348837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.026 [2024-11-20 09:59:12.348843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.026 [2024-11-20 09:59:12.348850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.287 [2024-11-20 09:59:12.361010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.287 [2024-11-20 09:59:12.361426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.287 [2024-11-20 09:59:12.361443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.287 [2024-11-20 09:59:12.361451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.287 [2024-11-20 09:59:12.361615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.287 [2024-11-20 09:59:12.361778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.287 [2024-11-20 09:59:12.361788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.287 [2024-11-20 09:59:12.361795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.287 [2024-11-20 09:59:12.361802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.287 [2024-11-20 09:59:12.374014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.287 [2024-11-20 09:59:12.374436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.287 [2024-11-20 09:59:12.374483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.287 [2024-11-20 09:59:12.374507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.287 [2024-11-20 09:59:12.375060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.375235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.375245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.288 [2024-11-20 09:59:12.375251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.288 [2024-11-20 09:59:12.375258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.288 [2024-11-20 09:59:12.386976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.288 [2024-11-20 09:59:12.387343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.288 [2024-11-20 09:59:12.387388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.288 [2024-11-20 09:59:12.387413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.288 [2024-11-20 09:59:12.387903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.388096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.388107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.288 [2024-11-20 09:59:12.388113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.288 [2024-11-20 09:59:12.388121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.288 [2024-11-20 09:59:12.399789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.288 [2024-11-20 09:59:12.400134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.288 [2024-11-20 09:59:12.400151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.288 [2024-11-20 09:59:12.400159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.288 [2024-11-20 09:59:12.400322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.400485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.400495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.288 [2024-11-20 09:59:12.400502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.288 [2024-11-20 09:59:12.400508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.288 [2024-11-20 09:59:12.412746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.288 [2024-11-20 09:59:12.413157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.288 [2024-11-20 09:59:12.413175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.288 [2024-11-20 09:59:12.413186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.288 [2024-11-20 09:59:12.413350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.413514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.413523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.288 [2024-11-20 09:59:12.413530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.288 [2024-11-20 09:59:12.413536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.288 [2024-11-20 09:59:12.425535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.288 [2024-11-20 09:59:12.425957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.288 [2024-11-20 09:59:12.425975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.288 [2024-11-20 09:59:12.425982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.288 [2024-11-20 09:59:12.426146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.426309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.426319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.288 [2024-11-20 09:59:12.426325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.288 [2024-11-20 09:59:12.426331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.288 [2024-11-20 09:59:12.438402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.288 [2024-11-20 09:59:12.438836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.288 [2024-11-20 09:59:12.438881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.288 [2024-11-20 09:59:12.438905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.288 [2024-11-20 09:59:12.439498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.440095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.440106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.288 [2024-11-20 09:59:12.440113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.288 [2024-11-20 09:59:12.440119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.288 [2024-11-20 09:59:12.451253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.288 [2024-11-20 09:59:12.451656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.288 [2024-11-20 09:59:12.451701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.288 [2024-11-20 09:59:12.451726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.288 [2024-11-20 09:59:12.452138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.452316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.452326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.288 [2024-11-20 09:59:12.452333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.288 [2024-11-20 09:59:12.452342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.288 [2024-11-20 09:59:12.464152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.288 [2024-11-20 09:59:12.464586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.288 [2024-11-20 09:59:12.464631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.288 [2024-11-20 09:59:12.464656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.288 [2024-11-20 09:59:12.465202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.465377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.465387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.288 [2024-11-20 09:59:12.465394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.288 [2024-11-20 09:59:12.465402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.288 [2024-11-20 09:59:12.477048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.288 [2024-11-20 09:59:12.477464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.288 [2024-11-20 09:59:12.477481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.288 [2024-11-20 09:59:12.477489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.288 [2024-11-20 09:59:12.477653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.477818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.477828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.288 [2024-11-20 09:59:12.477834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.288 [2024-11-20 09:59:12.477840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.288 9499.67 IOPS, 37.11 MiB/s [2024-11-20T08:59:12.620Z] [2024-11-20 09:59:12.490899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.288 [2024-11-20 09:59:12.491333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.288 [2024-11-20 09:59:12.491380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.288 [2024-11-20 09:59:12.491404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.288 [2024-11-20 09:59:12.491997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.288 [2024-11-20 09:59:12.492435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.288 [2024-11-20 09:59:12.492445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.492456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.492464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.289 [2024-11-20 09:59:12.503691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.289 [2024-11-20 09:59:12.504118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.289 [2024-11-20 09:59:12.504163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.289 [2024-11-20 09:59:12.504187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.289 [2024-11-20 09:59:12.504766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.289 [2024-11-20 09:59:12.505251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.289 [2024-11-20 09:59:12.505261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.505268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.505275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.289 [2024-11-20 09:59:12.516500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.289 [2024-11-20 09:59:12.516895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.289 [2024-11-20 09:59:12.516939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.289 [2024-11-20 09:59:12.516979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.289 [2024-11-20 09:59:12.517434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.289 [2024-11-20 09:59:12.517600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.289 [2024-11-20 09:59:12.517610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.517616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.517623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.289 [2024-11-20 09:59:12.529291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.289 [2024-11-20 09:59:12.529683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.289 [2024-11-20 09:59:12.529699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.289 [2024-11-20 09:59:12.529707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.289 [2024-11-20 09:59:12.529869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.289 [2024-11-20 09:59:12.530058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.289 [2024-11-20 09:59:12.530068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.530075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.530082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.289 [2024-11-20 09:59:12.542220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.289 [2024-11-20 09:59:12.542614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.289 [2024-11-20 09:59:12.542632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.289 [2024-11-20 09:59:12.542640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.289 [2024-11-20 09:59:12.542802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.289 [2024-11-20 09:59:12.542972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.289 [2024-11-20 09:59:12.542982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.542989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.542995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.289 [2024-11-20 09:59:12.555191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.289 [2024-11-20 09:59:12.555570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.289 [2024-11-20 09:59:12.555587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.289 [2024-11-20 09:59:12.555595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.289 [2024-11-20 09:59:12.555757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.289 [2024-11-20 09:59:12.555919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.289 [2024-11-20 09:59:12.555929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.555936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.555942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.289 [2024-11-20 09:59:12.568096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.289 [2024-11-20 09:59:12.568471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.289 [2024-11-20 09:59:12.568514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.289 [2024-11-20 09:59:12.568538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.289 [2024-11-20 09:59:12.569131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.289 [2024-11-20 09:59:12.569559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.289 [2024-11-20 09:59:12.569569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.569577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.569585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.289 [2024-11-20 09:59:12.581272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.289 [2024-11-20 09:59:12.581627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.289 [2024-11-20 09:59:12.581645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.289 [2024-11-20 09:59:12.581659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.289 [2024-11-20 09:59:12.581831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.289 [2024-11-20 09:59:12.582018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.289 [2024-11-20 09:59:12.582028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.582034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.582041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.289 [2024-11-20 09:59:12.594302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.289 [2024-11-20 09:59:12.594745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.289 [2024-11-20 09:59:12.594789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.289 [2024-11-20 09:59:12.594814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.289 [2024-11-20 09:59:12.595324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.289 [2024-11-20 09:59:12.595490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.289 [2024-11-20 09:59:12.595500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.595507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.595513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.289 [2024-11-20 09:59:12.607213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.289 [2024-11-20 09:59:12.607614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.289 [2024-11-20 09:59:12.607656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.289 [2024-11-20 09:59:12.607680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.289 [2024-11-20 09:59:12.608270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.289 [2024-11-20 09:59:12.608660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.289 [2024-11-20 09:59:12.608670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.289 [2024-11-20 09:59:12.608676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.289 [2024-11-20 09:59:12.608682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.550 [2024-11-20 09:59:12.620184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.550 [2024-11-20 09:59:12.620573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.550 [2024-11-20 09:59:12.620589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.550 [2024-11-20 09:59:12.620597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.550 [2024-11-20 09:59:12.620762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.550 [2024-11-20 09:59:12.620951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.550 [2024-11-20 09:59:12.620963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.550 [2024-11-20 09:59:12.620970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.550 [2024-11-20 09:59:12.620978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.550 [2024-11-20 09:59:12.633098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.550 [2024-11-20 09:59:12.633521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.550 [2024-11-20 09:59:12.633580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.633605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.634197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.634738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.634757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.634772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.634787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.647994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.648487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.648510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.648520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.648774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.649035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.649049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.649059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.649069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.661005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.661434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.661486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.661510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.662104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.662591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.662601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.662611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.662618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.673890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.674301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.674339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.674365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.674904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.675073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.675082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.675088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.675094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.686699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.687114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.687132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.687140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.687304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.687473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.687483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.687489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.687496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.699500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.699909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.699967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.699993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.700459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.700624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.700634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.700640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.700647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.712447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.712880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.712924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.712962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.713482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.713657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.713667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.713674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.713681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.725313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.725730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.725747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.725754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.725917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.726112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.726122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.726129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.726137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.738191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.738581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.738598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.738605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.738767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.738931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.738941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.738953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.738960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.751335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.751760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.751777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.751788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.751958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.752122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.752132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.752138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.752145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.764222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.551 [2024-11-20 09:59:12.764633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.551 [2024-11-20 09:59:12.764650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.551 [2024-11-20 09:59:12.764657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.551 [2024-11-20 09:59:12.764820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.551 [2024-11-20 09:59:12.765005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.551 [2024-11-20 09:59:12.765016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.551 [2024-11-20 09:59:12.765023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.551 [2024-11-20 09:59:12.765032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.551 [2024-11-20 09:59:12.777234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.552 [2024-11-20 09:59:12.777628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.552 [2024-11-20 09:59:12.777644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.552 [2024-11-20 09:59:12.777652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.552 [2024-11-20 09:59:12.777815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.552 [2024-11-20 09:59:12.777983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.552 [2024-11-20 09:59:12.777993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.552 [2024-11-20 09:59:12.778000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.552 [2024-11-20 09:59:12.778007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.552 [2024-11-20 09:59:12.790059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.552 [2024-11-20 09:59:12.790470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.552 [2024-11-20 09:59:12.790487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.552 [2024-11-20 09:59:12.790496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.552 [2024-11-20 09:59:12.790660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.552 [2024-11-20 09:59:12.790828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.552 [2024-11-20 09:59:12.790838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.552 [2024-11-20 09:59:12.790845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.552 [2024-11-20 09:59:12.790851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.552 [2024-11-20 09:59:12.802875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.552 [2024-11-20 09:59:12.803223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.552 [2024-11-20 09:59:12.803240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.552 [2024-11-20 09:59:12.803247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.552 [2024-11-20 09:59:12.803410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.552 [2024-11-20 09:59:12.803573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.552 [2024-11-20 09:59:12.803582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.552 [2024-11-20 09:59:12.803589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.552 [2024-11-20 09:59:12.803595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.552 [2024-11-20 09:59:12.815740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.552 [2024-11-20 09:59:12.816129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.552 [2024-11-20 09:59:12.816147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.552 [2024-11-20 09:59:12.816155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.552 [2024-11-20 09:59:12.816317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.552 [2024-11-20 09:59:12.816479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.552 [2024-11-20 09:59:12.816489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.552 [2024-11-20 09:59:12.816495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.552 [2024-11-20 09:59:12.816502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.552 [2024-11-20 09:59:12.828578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.552 [2024-11-20 09:59:12.828943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.552 [2024-11-20 09:59:12.828966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.552 [2024-11-20 09:59:12.828974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.552 [2024-11-20 09:59:12.829137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.552 [2024-11-20 09:59:12.829301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.552 [2024-11-20 09:59:12.829312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.552 [2024-11-20 09:59:12.829323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.552 [2024-11-20 09:59:12.829331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.552 [2024-11-20 09:59:12.841802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.552 [2024-11-20 09:59:12.842242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.552 [2024-11-20 09:59:12.842287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.552 [2024-11-20 09:59:12.842310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.552 [2024-11-20 09:59:12.842888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.552 [2024-11-20 09:59:12.843109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.552 [2024-11-20 09:59:12.843119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.552 [2024-11-20 09:59:12.843126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.552 [2024-11-20 09:59:12.843133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.552 [2024-11-20 09:59:12.854717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.552 [2024-11-20 09:59:12.855170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.552 [2024-11-20 09:59:12.855209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.552 [2024-11-20 09:59:12.855235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.552 [2024-11-20 09:59:12.855814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.552 [2024-11-20 09:59:12.856018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.552 [2024-11-20 09:59:12.856028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.552 [2024-11-20 09:59:12.856035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.552 [2024-11-20 09:59:12.856041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.552 [2024-11-20 09:59:12.867634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.552 [2024-11-20 09:59:12.868080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.552 [2024-11-20 09:59:12.868126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.552 [2024-11-20 09:59:12.868149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.552 [2024-11-20 09:59:12.868727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.552 [2024-11-20 09:59:12.869269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.552 [2024-11-20 09:59:12.869280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.552 [2024-11-20 09:59:12.869287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.552 [2024-11-20 09:59:12.869294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.813 [2024-11-20 09:59:12.880797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.813 [2024-11-20 09:59:12.881197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.813 [2024-11-20 09:59:12.881242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.813 [2024-11-20 09:59:12.881266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.813 [2024-11-20 09:59:12.881843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.813 [2024-11-20 09:59:12.882149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.813 [2024-11-20 09:59:12.882158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.813 [2024-11-20 09:59:12.882165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.813 [2024-11-20 09:59:12.882172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.813 [2024-11-20 09:59:12.893771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.813 [2024-11-20 09:59:12.894131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.813 [2024-11-20 09:59:12.894149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.813 [2024-11-20 09:59:12.894157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.813 [2024-11-20 09:59:12.894321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.813 [2024-11-20 09:59:12.894485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.813 [2024-11-20 09:59:12.894494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.813 [2024-11-20 09:59:12.894500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.813 [2024-11-20 09:59:12.894507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.813 [2024-11-20 09:59:12.906729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.813 [2024-11-20 09:59:12.907093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.813 [2024-11-20 09:59:12.907111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.813 [2024-11-20 09:59:12.907119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.813 [2024-11-20 09:59:12.907291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.813 [2024-11-20 09:59:12.907464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.813 [2024-11-20 09:59:12.907475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.813 [2024-11-20 09:59:12.907485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.813 [2024-11-20 09:59:12.907493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.813 [2024-11-20 09:59:12.919599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.813 [2024-11-20 09:59:12.920029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.813 [2024-11-20 09:59:12.920075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.813 [2024-11-20 09:59:12.920106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.813 [2024-11-20 09:59:12.920508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.813 [2024-11-20 09:59:12.920673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.813 [2024-11-20 09:59:12.920683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.813 [2024-11-20 09:59:12.920689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.813 [2024-11-20 09:59:12.920696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.813 [2024-11-20 09:59:12.932584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.813 [2024-11-20 09:59:12.933003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.813 [2024-11-20 09:59:12.933047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.813 [2024-11-20 09:59:12.933073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.813 [2024-11-20 09:59:12.933651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.814 [2024-11-20 09:59:12.934249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.814 [2024-11-20 09:59:12.934278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.814 [2024-11-20 09:59:12.934304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.814 [2024-11-20 09:59:12.934311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.814 [2024-11-20 09:59:12.945584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.814 [2024-11-20 09:59:12.946011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.814 [2024-11-20 09:59:12.946029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.814 [2024-11-20 09:59:12.946037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.814 [2024-11-20 09:59:12.946209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.814 [2024-11-20 09:59:12.946383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.814 [2024-11-20 09:59:12.946393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.814 [2024-11-20 09:59:12.946400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.814 [2024-11-20 09:59:12.946406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.814 [2024-11-20 09:59:12.958376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.814 [2024-11-20 09:59:12.958720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.814 [2024-11-20 09:59:12.958736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.814 [2024-11-20 09:59:12.958744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.814 [2024-11-20 09:59:12.958907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.814 [2024-11-20 09:59:12.959097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.814 [2024-11-20 09:59:12.959110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.814 [2024-11-20 09:59:12.959117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.814 [2024-11-20 09:59:12.959124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.814 [2024-11-20 09:59:12.971223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:49.814 [2024-11-20 09:59:12.971633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.814 [2024-11-20 09:59:12.971649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:49.814 [2024-11-20 09:59:12.971657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:49.814 [2024-11-20 09:59:12.971820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:49.814 [2024-11-20 09:59:12.972005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:49.814 [2024-11-20 09:59:12.972015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:49.814 [2024-11-20 09:59:12.972022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:49.814 [2024-11-20 09:59:12.972029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:49.814 [2024-11-20 09:59:12.984062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.814 [2024-11-20 09:59:12.984476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.814 [2024-11-20 09:59:12.984493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.814 [2024-11-20 09:59:12.984500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.814 [2024-11-20 09:59:12.984663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.814 [2024-11-20 09:59:12.984828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.814 [2024-11-20 09:59:12.984838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.814 [2024-11-20 09:59:12.984844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.814 [2024-11-20 09:59:12.984850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.814 [2024-11-20 09:59:12.996972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.814 [2024-11-20 09:59:12.997393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.814 [2024-11-20 09:59:12.997411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.814 [2024-11-20 09:59:12.997418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.814 [2024-11-20 09:59:12.997581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.814 [2024-11-20 09:59:12.997744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.814 [2024-11-20 09:59:12.997754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.814 [2024-11-20 09:59:12.997761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.814 [2024-11-20 09:59:12.997771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.814 [2024-11-20 09:59:13.009839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.814 [2024-11-20 09:59:13.010245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.814 [2024-11-20 09:59:13.010290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.814 [2024-11-20 09:59:13.010315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.814 [2024-11-20 09:59:13.010735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.814 [2024-11-20 09:59:13.010900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.814 [2024-11-20 09:59:13.010910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.814 [2024-11-20 09:59:13.010916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.814 [2024-11-20 09:59:13.010922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.814 [2024-11-20 09:59:13.022771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.814 [2024-11-20 09:59:13.023172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.814 [2024-11-20 09:59:13.023189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.814 [2024-11-20 09:59:13.023197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.814 [2024-11-20 09:59:13.023360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.814 [2024-11-20 09:59:13.023524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.814 [2024-11-20 09:59:13.023534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.814 [2024-11-20 09:59:13.023540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.814 [2024-11-20 09:59:13.023546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.814 [2024-11-20 09:59:13.035604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.814 [2024-11-20 09:59:13.036023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.814 [2024-11-20 09:59:13.036040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.814 [2024-11-20 09:59:13.036048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.814 [2024-11-20 09:59:13.036211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.814 [2024-11-20 09:59:13.036376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.814 [2024-11-20 09:59:13.036386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.814 [2024-11-20 09:59:13.036392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.814 [2024-11-20 09:59:13.036398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.814 [2024-11-20 09:59:13.048630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.814 [2024-11-20 09:59:13.049077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.814 [2024-11-20 09:59:13.049123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.814 [2024-11-20 09:59:13.049148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.814 [2024-11-20 09:59:13.049607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.814 [2024-11-20 09:59:13.049771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.814 [2024-11-20 09:59:13.049781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.814 [2024-11-20 09:59:13.049788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.814 [2024-11-20 09:59:13.049795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.814 [2024-11-20 09:59:13.061599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.814 [2024-11-20 09:59:13.061989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.814 [2024-11-20 09:59:13.062005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.814 [2024-11-20 09:59:13.062013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.815 [2024-11-20 09:59:13.062176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.815 [2024-11-20 09:59:13.062340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.815 [2024-11-20 09:59:13.062348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.815 [2024-11-20 09:59:13.062355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.815 [2024-11-20 09:59:13.062361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.815 [2024-11-20 09:59:13.074420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.815 [2024-11-20 09:59:13.074765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.815 [2024-11-20 09:59:13.074812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.815 [2024-11-20 09:59:13.074837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.815 [2024-11-20 09:59:13.075366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.815 [2024-11-20 09:59:13.075541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.815 [2024-11-20 09:59:13.075551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.815 [2024-11-20 09:59:13.075558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.815 [2024-11-20 09:59:13.075565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.815 [2024-11-20 09:59:13.087325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.815 [2024-11-20 09:59:13.087770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.815 [2024-11-20 09:59:13.087788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.815 [2024-11-20 09:59:13.087796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.815 [2024-11-20 09:59:13.087968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.815 [2024-11-20 09:59:13.088158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.815 [2024-11-20 09:59:13.088168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.815 [2024-11-20 09:59:13.088175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.815 [2024-11-20 09:59:13.088182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.815 [2024-11-20 09:59:13.100504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.815 [2024-11-20 09:59:13.100985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.815 [2024-11-20 09:59:13.101031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.815 [2024-11-20 09:59:13.101055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.815 [2024-11-20 09:59:13.101596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.815 [2024-11-20 09:59:13.101774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.815 [2024-11-20 09:59:13.101785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.815 [2024-11-20 09:59:13.101792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.815 [2024-11-20 09:59:13.101799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.815 [2024-11-20 09:59:13.113488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.815 [2024-11-20 09:59:13.113867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.815 [2024-11-20 09:59:13.113885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.815 [2024-11-20 09:59:13.113893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.815 [2024-11-20 09:59:13.114069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.815 [2024-11-20 09:59:13.114242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.815 [2024-11-20 09:59:13.114252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.815 [2024-11-20 09:59:13.114259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.815 [2024-11-20 09:59:13.114265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.815 [2024-11-20 09:59:13.126443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.815 [2024-11-20 09:59:13.126798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.815 [2024-11-20 09:59:13.126816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.815 [2024-11-20 09:59:13.126824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.815 [2024-11-20 09:59:13.127001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.815 [2024-11-20 09:59:13.127173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.815 [2024-11-20 09:59:13.127186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.815 [2024-11-20 09:59:13.127193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.815 [2024-11-20 09:59:13.127199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:49.815 [2024-11-20 09:59:13.139544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:49.815 [2024-11-20 09:59:13.139996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:49.815 [2024-11-20 09:59:13.140042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:49.815 [2024-11-20 09:59:13.140066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:49.815 [2024-11-20 09:59:13.140522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:49.815 [2024-11-20 09:59:13.140696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:49.815 [2024-11-20 09:59:13.140706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:49.815 [2024-11-20 09:59:13.140713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:49.815 [2024-11-20 09:59:13.140720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.152472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.152823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.076 [2024-11-20 09:59:13.152866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.076 [2024-11-20 09:59:13.152890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.076 [2024-11-20 09:59:13.153484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.076 [2024-11-20 09:59:13.153956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.076 [2024-11-20 09:59:13.153967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.076 [2024-11-20 09:59:13.153973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.076 [2024-11-20 09:59:13.153980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.165330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.165728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.076 [2024-11-20 09:59:13.165745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.076 [2024-11-20 09:59:13.165753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.076 [2024-11-20 09:59:13.165915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.076 [2024-11-20 09:59:13.166107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.076 [2024-11-20 09:59:13.166116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.076 [2024-11-20 09:59:13.166123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.076 [2024-11-20 09:59:13.166133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.178263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.178683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.076 [2024-11-20 09:59:13.178727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.076 [2024-11-20 09:59:13.178751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.076 [2024-11-20 09:59:13.179345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.076 [2024-11-20 09:59:13.179934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.076 [2024-11-20 09:59:13.179960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.076 [2024-11-20 09:59:13.179976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.076 [2024-11-20 09:59:13.179990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.193155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.193605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.076 [2024-11-20 09:59:13.193626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.076 [2024-11-20 09:59:13.193637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.076 [2024-11-20 09:59:13.193891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.076 [2024-11-20 09:59:13.194152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.076 [2024-11-20 09:59:13.194166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.076 [2024-11-20 09:59:13.194177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.076 [2024-11-20 09:59:13.194187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.206022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.206426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.076 [2024-11-20 09:59:13.206470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.076 [2024-11-20 09:59:13.206494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.076 [2024-11-20 09:59:13.206974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.076 [2024-11-20 09:59:13.207144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.076 [2024-11-20 09:59:13.207154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.076 [2024-11-20 09:59:13.207161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.076 [2024-11-20 09:59:13.207167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.218864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.219300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.076 [2024-11-20 09:59:13.219344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.076 [2024-11-20 09:59:13.219368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.076 [2024-11-20 09:59:13.219834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.076 [2024-11-20 09:59:13.220021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.076 [2024-11-20 09:59:13.220032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.076 [2024-11-20 09:59:13.220039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.076 [2024-11-20 09:59:13.220046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.231712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.232132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.076 [2024-11-20 09:59:13.232177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.076 [2024-11-20 09:59:13.232201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.076 [2024-11-20 09:59:13.232749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.076 [2024-11-20 09:59:13.232914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.076 [2024-11-20 09:59:13.232923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.076 [2024-11-20 09:59:13.232929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.076 [2024-11-20 09:59:13.232936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.244625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.244963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.076 [2024-11-20 09:59:13.244980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.076 [2024-11-20 09:59:13.244988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.076 [2024-11-20 09:59:13.245150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.076 [2024-11-20 09:59:13.245315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.076 [2024-11-20 09:59:13.245324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.076 [2024-11-20 09:59:13.245331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.076 [2024-11-20 09:59:13.245338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.257783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.258176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.076 [2024-11-20 09:59:13.258221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.076 [2024-11-20 09:59:13.258245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.076 [2024-11-20 09:59:13.258832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.076 [2024-11-20 09:59:13.259334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.076 [2024-11-20 09:59:13.259345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.076 [2024-11-20 09:59:13.259353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.076 [2024-11-20 09:59:13.259360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.076 [2024-11-20 09:59:13.270643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.076 [2024-11-20 09:59:13.270983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.077 [2024-11-20 09:59:13.271001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.077 [2024-11-20 09:59:13.271009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.077 [2024-11-20 09:59:13.271172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.077 [2024-11-20 09:59:13.271336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.077 [2024-11-20 09:59:13.271346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.077 [2024-11-20 09:59:13.271352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.077 [2024-11-20 09:59:13.271359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.077 [2024-11-20 09:59:13.283633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.077 [2024-11-20 09:59:13.284051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.077 [2024-11-20 09:59:13.284095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.077 [2024-11-20 09:59:13.284120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.077 [2024-11-20 09:59:13.284699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.077 [2024-11-20 09:59:13.285299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.077 [2024-11-20 09:59:13.285310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.077 [2024-11-20 09:59:13.285317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.077 [2024-11-20 09:59:13.285323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.077 [2024-11-20 09:59:13.296586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.077 [2024-11-20 09:59:13.296934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.077 [2024-11-20 09:59:13.296990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.077 [2024-11-20 09:59:13.297014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.077 [2024-11-20 09:59:13.297454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.077 [2024-11-20 09:59:13.297628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.077 [2024-11-20 09:59:13.297642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.077 [2024-11-20 09:59:13.297649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.077 [2024-11-20 09:59:13.297656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.077 [2024-11-20 09:59:13.309496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.077 [2024-11-20 09:59:13.309891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.077 [2024-11-20 09:59:13.309908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.077 [2024-11-20 09:59:13.309916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.077 [2024-11-20 09:59:13.310092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.077 [2024-11-20 09:59:13.310275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.077 [2024-11-20 09:59:13.310285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.077 [2024-11-20 09:59:13.310293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.077 [2024-11-20 09:59:13.310300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.077 [2024-11-20 09:59:13.322536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.077 [2024-11-20 09:59:13.322821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.077 [2024-11-20 09:59:13.322839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.077 [2024-11-20 09:59:13.322847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.077 [2024-11-20 09:59:13.323015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.077 [2024-11-20 09:59:13.323178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.077 [2024-11-20 09:59:13.323187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.077 [2024-11-20 09:59:13.323193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.077 [2024-11-20 09:59:13.323200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.077 [2024-11-20 09:59:13.335585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.077 [2024-11-20 09:59:13.335869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.077 [2024-11-20 09:59:13.335886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.077 [2024-11-20 09:59:13.335895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.077 [2024-11-20 09:59:13.336076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.077 [2024-11-20 09:59:13.336256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.077 [2024-11-20 09:59:13.336266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.077 [2024-11-20 09:59:13.336274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.077 [2024-11-20 09:59:13.336285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.077 [2024-11-20 09:59:13.348611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.077 [2024-11-20 09:59:13.348952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.077 [2024-11-20 09:59:13.348970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.077 [2024-11-20 09:59:13.348978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.077 [2024-11-20 09:59:13.349151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.077 [2024-11-20 09:59:13.349325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.077 [2024-11-20 09:59:13.349335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.077 [2024-11-20 09:59:13.349343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.077 [2024-11-20 09:59:13.349350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.077 [2024-11-20 09:59:13.361677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.077 [2024-11-20 09:59:13.362035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.077 [2024-11-20 09:59:13.362053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.077 [2024-11-20 09:59:13.362061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.077 [2024-11-20 09:59:13.362238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.077 [2024-11-20 09:59:13.362415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.077 [2024-11-20 09:59:13.362426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.077 [2024-11-20 09:59:13.362433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.077 [2024-11-20 09:59:13.362441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.077 [2024-11-20 09:59:13.374784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.077 [2024-11-20 09:59:13.375220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.077 [2024-11-20 09:59:13.375238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.077 [2024-11-20 09:59:13.375247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.077 [2024-11-20 09:59:13.375425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.077 [2024-11-20 09:59:13.375604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.077 [2024-11-20 09:59:13.375615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.077 [2024-11-20 09:59:13.375622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.077 [2024-11-20 09:59:13.375629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.077 [2024-11-20 09:59:13.387978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.077 [2024-11-20 09:59:13.388326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.077 [2024-11-20 09:59:13.388348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.077 [2024-11-20 09:59:13.388356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.077 [2024-11-20 09:59:13.388534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.077 [2024-11-20 09:59:13.388713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.077 [2024-11-20 09:59:13.388724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.077 [2024-11-20 09:59:13.388730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.077 [2024-11-20 09:59:13.388737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.078 [2024-11-20 09:59:13.401076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.078 [2024-11-20 09:59:13.401479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.078 [2024-11-20 09:59:13.401497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.078 [2024-11-20 09:59:13.401505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.078 [2024-11-20 09:59:13.401681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.078 [2024-11-20 09:59:13.401859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.078 [2024-11-20 09:59:13.401868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.078 [2024-11-20 09:59:13.401875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.078 [2024-11-20 09:59:13.401882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.337 [2024-11-20 09:59:13.414216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.338 [2024-11-20 09:59:13.414583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.338 [2024-11-20 09:59:13.414600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.338 [2024-11-20 09:59:13.414609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.338 [2024-11-20 09:59:13.414787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.338 [2024-11-20 09:59:13.414970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.338 [2024-11-20 09:59:13.414981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.338 [2024-11-20 09:59:13.414989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.338 [2024-11-20 09:59:13.414996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.338 [2024-11-20 09:59:13.427334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.338 [2024-11-20 09:59:13.427768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.338 [2024-11-20 09:59:13.427785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.338 [2024-11-20 09:59:13.427794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.338 [2024-11-20 09:59:13.427979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.338 [2024-11-20 09:59:13.428158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.338 [2024-11-20 09:59:13.428168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.338 [2024-11-20 09:59:13.428175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.338 [2024-11-20 09:59:13.428182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.338 [2024-11-20 09:59:13.440501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.338 [2024-11-20 09:59:13.440828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.338 [2024-11-20 09:59:13.440845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.338 [2024-11-20 09:59:13.440854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.338 [2024-11-20 09:59:13.441035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.338 [2024-11-20 09:59:13.441216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.338 [2024-11-20 09:59:13.441226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.338 [2024-11-20 09:59:13.441232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.338 [2024-11-20 09:59:13.441239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.338 [2024-11-20 09:59:13.453602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.338 [2024-11-20 09:59:13.453963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.338 [2024-11-20 09:59:13.453981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.338 [2024-11-20 09:59:13.453990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.338 [2024-11-20 09:59:13.454167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.338 [2024-11-20 09:59:13.454345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.338 [2024-11-20 09:59:13.454356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.338 [2024-11-20 09:59:13.454363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.338 [2024-11-20 09:59:13.454370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.338 [2024-11-20 09:59:13.466701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.338 [2024-11-20 09:59:13.467131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.338 [2024-11-20 09:59:13.467149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.338 [2024-11-20 09:59:13.467157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.338 [2024-11-20 09:59:13.467334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.338 [2024-11-20 09:59:13.467514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.338 [2024-11-20 09:59:13.467527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.338 [2024-11-20 09:59:13.467536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.338 [2024-11-20 09:59:13.467543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.338 [2024-11-20 09:59:13.479882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.338 [2024-11-20 09:59:13.480318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.338 [2024-11-20 09:59:13.480337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.338 [2024-11-20 09:59:13.480345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.338 [2024-11-20 09:59:13.480523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.338 [2024-11-20 09:59:13.480701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.338 [2024-11-20 09:59:13.480712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.338 [2024-11-20 09:59:13.480718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.338 [2024-11-20 09:59:13.480725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.338 7124.75 IOPS, 27.83 MiB/s [2024-11-20T08:59:13.670Z] [2024-11-20 09:59:13.494050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.338 [2024-11-20 09:59:13.494487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.338 [2024-11-20 09:59:13.494505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.338 [2024-11-20 09:59:13.494513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.338 [2024-11-20 09:59:13.494691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.338 [2024-11-20 09:59:13.494870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.338 [2024-11-20 09:59:13.494880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.338 [2024-11-20 09:59:13.494888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.338 [2024-11-20 09:59:13.494895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.338 [2024-11-20 09:59:13.507229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.338 [2024-11-20 09:59:13.507640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.338 [2024-11-20 09:59:13.507657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.338 [2024-11-20 09:59:13.507666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.338 [2024-11-20 09:59:13.507842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.338 [2024-11-20 09:59:13.508028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.338 [2024-11-20 09:59:13.508039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.338 [2024-11-20 09:59:13.508046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.338 [2024-11-20 09:59:13.508053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.338 [2024-11-20 09:59:13.520380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.338 [2024-11-20 09:59:13.520659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.338 [2024-11-20 09:59:13.520677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.338 [2024-11-20 09:59:13.520685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.339 [2024-11-20 09:59:13.520864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.339 [2024-11-20 09:59:13.521049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.339 [2024-11-20 09:59:13.521059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.339 [2024-11-20 09:59:13.521066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.339 [2024-11-20 09:59:13.521073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.339 [2024-11-20 09:59:13.533576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.339 [2024-11-20 09:59:13.534016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.339 [2024-11-20 09:59:13.534034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.339 [2024-11-20 09:59:13.534042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.339 [2024-11-20 09:59:13.534220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.339 [2024-11-20 09:59:13.534400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.339 [2024-11-20 09:59:13.534410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.339 [2024-11-20 09:59:13.534417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.339 [2024-11-20 09:59:13.534426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.339 [2024-11-20 09:59:13.546769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.339 [2024-11-20 09:59:13.547183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.339 [2024-11-20 09:59:13.547202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.339 [2024-11-20 09:59:13.547210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.339 [2024-11-20 09:59:13.547387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.339 [2024-11-20 09:59:13.547567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.339 [2024-11-20 09:59:13.547577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.339 [2024-11-20 09:59:13.547585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.339 [2024-11-20 09:59:13.547593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.339 [2024-11-20 09:59:13.559919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.339 [2024-11-20 09:59:13.560351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.339 [2024-11-20 09:59:13.560373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.339 [2024-11-20 09:59:13.560381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.339 [2024-11-20 09:59:13.560559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.339 [2024-11-20 09:59:13.560740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.339 [2024-11-20 09:59:13.560750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.339 [2024-11-20 09:59:13.560757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.339 [2024-11-20 09:59:13.560765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.339 [2024-11-20 09:59:13.573106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.339 [2024-11-20 09:59:13.573535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.339 [2024-11-20 09:59:13.573552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.339 [2024-11-20 09:59:13.573561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.339 [2024-11-20 09:59:13.573738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.339 [2024-11-20 09:59:13.573916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.339 [2024-11-20 09:59:13.573926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.339 [2024-11-20 09:59:13.573934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.339 [2024-11-20 09:59:13.573941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.339 [2024-11-20 09:59:13.586289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.339 [2024-11-20 09:59:13.586721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.339 [2024-11-20 09:59:13.586739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.339 [2024-11-20 09:59:13.586747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.339 [2024-11-20 09:59:13.586924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.339 [2024-11-20 09:59:13.587108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.339 [2024-11-20 09:59:13.587119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.339 [2024-11-20 09:59:13.587126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.339 [2024-11-20 09:59:13.587133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.339 [2024-11-20 09:59:13.599468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.339 [2024-11-20 09:59:13.599891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.339 [2024-11-20 09:59:13.599908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.339 [2024-11-20 09:59:13.599917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.339 [2024-11-20 09:59:13.600105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.339 [2024-11-20 09:59:13.600284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.339 [2024-11-20 09:59:13.600295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.339 [2024-11-20 09:59:13.600301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.339 [2024-11-20 09:59:13.600308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.339 [2024-11-20 09:59:13.612634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.339 [2024-11-20 09:59:13.613096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.339 [2024-11-20 09:59:13.613114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.339 [2024-11-20 09:59:13.613123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.339 [2024-11-20 09:59:13.613300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.339 [2024-11-20 09:59:13.613479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.339 [2024-11-20 09:59:13.613489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.339 [2024-11-20 09:59:13.613496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.339 [2024-11-20 09:59:13.613503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.339 [2024-11-20 09:59:13.625694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.339 [2024-11-20 09:59:13.626112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.339 [2024-11-20 09:59:13.626131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.339 [2024-11-20 09:59:13.626139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.339 [2024-11-20 09:59:13.626317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.339 [2024-11-20 09:59:13.626496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.339 [2024-11-20 09:59:13.626507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.339 [2024-11-20 09:59:13.626515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.339 [2024-11-20 09:59:13.626522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.340 [2024-11-20 09:59:13.638854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.340 [2024-11-20 09:59:13.639276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-11-20 09:59:13.639294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-11-20 09:59:13.639303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.340 [2024-11-20 09:59:13.639481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.340 [2024-11-20 09:59:13.639661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.340 [2024-11-20 09:59:13.639672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.340 [2024-11-20 09:59:13.639682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.340 [2024-11-20 09:59:13.639691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.340 [2024-11-20 09:59:13.652021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.340 [2024-11-20 09:59:13.652451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-11-20 09:59:13.652469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-11-20 09:59:13.652477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.340 [2024-11-20 09:59:13.652655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.340 [2024-11-20 09:59:13.652834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.340 [2024-11-20 09:59:13.652844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.340 [2024-11-20 09:59:13.652852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.340 [2024-11-20 09:59:13.652859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.340 [2024-11-20 09:59:13.665233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.340 [2024-11-20 09:59:13.665689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.340 [2024-11-20 09:59:13.665707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.340 [2024-11-20 09:59:13.665715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.340 [2024-11-20 09:59:13.665898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.340 [2024-11-20 09:59:13.666087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.340 [2024-11-20 09:59:13.666098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.340 [2024-11-20 09:59:13.666106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.340 [2024-11-20 09:59:13.666113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.600 [2024-11-20 09:59:13.678415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.600 [2024-11-20 09:59:13.678758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.600 [2024-11-20 09:59:13.678775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.600 [2024-11-20 09:59:13.678783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.600 [2024-11-20 09:59:13.678967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.600 [2024-11-20 09:59:13.679147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.600 [2024-11-20 09:59:13.679158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.600 [2024-11-20 09:59:13.679165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.679171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.691519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.691847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.691865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.691873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.692055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.601 [2024-11-20 09:59:13.692233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.601 [2024-11-20 09:59:13.692244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.601 [2024-11-20 09:59:13.692251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.692258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.704597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.704959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.704978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.704986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.705164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.601 [2024-11-20 09:59:13.705343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.601 [2024-11-20 09:59:13.705353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.601 [2024-11-20 09:59:13.705360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.705367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.717745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.718179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.718197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.718206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.718383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.601 [2024-11-20 09:59:13.718563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.601 [2024-11-20 09:59:13.718573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.601 [2024-11-20 09:59:13.718580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.718587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.730758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.731194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.731215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.731224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.731395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.601 [2024-11-20 09:59:13.731569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.601 [2024-11-20 09:59:13.731579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.601 [2024-11-20 09:59:13.731586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.731594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.743688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.744012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.744030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.744037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.744201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.601 [2024-11-20 09:59:13.744365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.601 [2024-11-20 09:59:13.744375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.601 [2024-11-20 09:59:13.744381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.744387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.756659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.757053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.757070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.757078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.757241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.601 [2024-11-20 09:59:13.757405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.601 [2024-11-20 09:59:13.757415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.601 [2024-11-20 09:59:13.757421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.757428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.769475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.769883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.769900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.769908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.770097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.601 [2024-11-20 09:59:13.770274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.601 [2024-11-20 09:59:13.770284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.601 [2024-11-20 09:59:13.770292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.770298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.782272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.782610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.782627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.782635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.782797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.601 [2024-11-20 09:59:13.782967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.601 [2024-11-20 09:59:13.782977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.601 [2024-11-20 09:59:13.782984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.782990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.795094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.795409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.795426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.795434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.795596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.601 [2024-11-20 09:59:13.795759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.601 [2024-11-20 09:59:13.795769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.601 [2024-11-20 09:59:13.795776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.601 [2024-11-20 09:59:13.795782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.601 [2024-11-20 09:59:13.808023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.601 [2024-11-20 09:59:13.808440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.601 [2024-11-20 09:59:13.808483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.601 [2024-11-20 09:59:13.808508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.601 [2024-11-20 09:59:13.809103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.809621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.809631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.809641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.809649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.602 [2024-11-20 09:59:13.820919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.602 [2024-11-20 09:59:13.821340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.602 [2024-11-20 09:59:13.821387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.602 [2024-11-20 09:59:13.821412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.602 [2024-11-20 09:59:13.822004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.822478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.822488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.822495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.822503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.602 [2024-11-20 09:59:13.833845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.602 [2024-11-20 09:59:13.834262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.602 [2024-11-20 09:59:13.834278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.602 [2024-11-20 09:59:13.834286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.602 [2024-11-20 09:59:13.834450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.834612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.834622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.834629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.834635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.602 [2024-11-20 09:59:13.846672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.602 [2024-11-20 09:59:13.847081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.602 [2024-11-20 09:59:13.847113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.602 [2024-11-20 09:59:13.847138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.602 [2024-11-20 09:59:13.847677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.847841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.847851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.847857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.847864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.602 [2024-11-20 09:59:13.859531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.602 [2024-11-20 09:59:13.859985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.602 [2024-11-20 09:59:13.860029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.602 [2024-11-20 09:59:13.860053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.602 [2024-11-20 09:59:13.860532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.860698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.860708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.860715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.860722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.602 [2024-11-20 09:59:13.872755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.602 [2024-11-20 09:59:13.873220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.602 [2024-11-20 09:59:13.873238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.602 [2024-11-20 09:59:13.873246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.602 [2024-11-20 09:59:13.873423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.873604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.873615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.873623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.873630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.602 [2024-11-20 09:59:13.885560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.602 [2024-11-20 09:59:13.885958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.602 [2024-11-20 09:59:13.885976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.602 [2024-11-20 09:59:13.885984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.602 [2024-11-20 09:59:13.886148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.886312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.886322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.886329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.886335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.602 [2024-11-20 09:59:13.898444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.602 [2024-11-20 09:59:13.898768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.602 [2024-11-20 09:59:13.898785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.602 [2024-11-20 09:59:13.898795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.602 [2024-11-20 09:59:13.898965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.899152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.899162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.899169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.899175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.602 [2024-11-20 09:59:13.911365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.602 [2024-11-20 09:59:13.911794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.602 [2024-11-20 09:59:13.911840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.602 [2024-11-20 09:59:13.911863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.602 [2024-11-20 09:59:13.912458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.912915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.912925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.912932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.912938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.602 [2024-11-20 09:59:13.924267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.602 [2024-11-20 09:59:13.924685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.602 [2024-11-20 09:59:13.924702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.602 [2024-11-20 09:59:13.924710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.602 [2024-11-20 09:59:13.924872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.602 [2024-11-20 09:59:13.925062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.602 [2024-11-20 09:59:13.925073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.602 [2024-11-20 09:59:13.925080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.602 [2024-11-20 09:59:13.925086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.864 [2024-11-20 09:59:13.937296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.864 [2024-11-20 09:59:13.937621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.864 [2024-11-20 09:59:13.937637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.864 [2024-11-20 09:59:13.937645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.864 [2024-11-20 09:59:13.937807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.864 [2024-11-20 09:59:13.937995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.864 [2024-11-20 09:59:13.938006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.864 [2024-11-20 09:59:13.938014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.864 [2024-11-20 09:59:13.938021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.864 [2024-11-20 09:59:13.950123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.864 [2024-11-20 09:59:13.950434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.864 [2024-11-20 09:59:13.950477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.864 [2024-11-20 09:59:13.950502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.864 [2024-11-20 09:59:13.951095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.864 [2024-11-20 09:59:13.951680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.864 [2024-11-20 09:59:13.951705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.864 [2024-11-20 09:59:13.951727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.864 [2024-11-20 09:59:13.951747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.864 [2024-11-20 09:59:13.963014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:50.864 [2024-11-20 09:59:13.963430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:50.864 [2024-11-20 09:59:13.963447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:50.864 [2024-11-20 09:59:13.963454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:50.864 [2024-11-20 09:59:13.963617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:50.864 [2024-11-20 09:59:13.963780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:50.864 [2024-11-20 09:59:13.963790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:50.864 [2024-11-20 09:59:13.963797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:50.864 [2024-11-20 09:59:13.963803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:50.864 [2024-11-20 09:59:13.975908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.864 [2024-11-20 09:59:13.976325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.864 [2024-11-20 09:59:13.976371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.864 [2024-11-20 09:59:13.976396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.864 [2024-11-20 09:59:13.976906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.864 [2024-11-20 09:59:13.977095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.864 [2024-11-20 09:59:13.977113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.864 [2024-11-20 09:59:13.977123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.864 [2024-11-20 09:59:13.977130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.864 [2024-11-20 09:59:13.988903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.864 [2024-11-20 09:59:13.989250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.864 [2024-11-20 09:59:13.989267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.864 [2024-11-20 09:59:13.989276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.864 [2024-11-20 09:59:13.989439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.864 [2024-11-20 09:59:13.989602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.864 [2024-11-20 09:59:13.989612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.864 [2024-11-20 09:59:13.989618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.864 [2024-11-20 09:59:13.989626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.864 [2024-11-20 09:59:14.001823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.864 [2024-11-20 09:59:14.002248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.864 [2024-11-20 09:59:14.002297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.864 [2024-11-20 09:59:14.002323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.864 [2024-11-20 09:59:14.002835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.864 [2024-11-20 09:59:14.003024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.864 [2024-11-20 09:59:14.003033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.864 [2024-11-20 09:59:14.003040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.864 [2024-11-20 09:59:14.003047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.864 [2024-11-20 09:59:14.014751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.864 [2024-11-20 09:59:14.015175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.015222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.015246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.015726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.015890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.015900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.015907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.015914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.865 [2024-11-20 09:59:14.027641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.865 [2024-11-20 09:59:14.028049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.028096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.028121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.028697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.028893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.028902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.028908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.028915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.865 [2024-11-20 09:59:14.040558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.865 [2024-11-20 09:59:14.040988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.041034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.041059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.041637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.041900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.041910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.041917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.041922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.865 [2024-11-20 09:59:14.053470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.865 [2024-11-20 09:59:14.053886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.053903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.053911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.054104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.054278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.054287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.054295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.054301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.865 [2024-11-20 09:59:14.066377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.865 [2024-11-20 09:59:14.066784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.066801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.066813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.067000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.067174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.067184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.067191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.067198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.865 [2024-11-20 09:59:14.079214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.865 [2024-11-20 09:59:14.079619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.079665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.079689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.080198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.080486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.080505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.080520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.080534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.865 [2024-11-20 09:59:14.094185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.865 [2024-11-20 09:59:14.094706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.094756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.094781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.095332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.095588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.095601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.095612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.095622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.865 [2024-11-20 09:59:14.107218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.865 [2024-11-20 09:59:14.107633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.107650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.107658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.107826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.108021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.108031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.108039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.108046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.865 [2024-11-20 09:59:14.120252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.865 [2024-11-20 09:59:14.120721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.120764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.120789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.121238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.121413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.121424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.121431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.121438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.865 [2024-11-20 09:59:14.133341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.865 [2024-11-20 09:59:14.133775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.865 [2024-11-20 09:59:14.133793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.865 [2024-11-20 09:59:14.133802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.865 [2024-11-20 09:59:14.133985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.865 [2024-11-20 09:59:14.134166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.865 [2024-11-20 09:59:14.134177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.865 [2024-11-20 09:59:14.134184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.865 [2024-11-20 09:59:14.134191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.866 [2024-11-20 09:59:14.146535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.866 [2024-11-20 09:59:14.146920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.866 [2024-11-20 09:59:14.146938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.866 [2024-11-20 09:59:14.146951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.866 [2024-11-20 09:59:14.147129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.866 [2024-11-20 09:59:14.147308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.866 [2024-11-20 09:59:14.147317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.866 [2024-11-20 09:59:14.147329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.866 [2024-11-20 09:59:14.147337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.866 [2024-11-20 09:59:14.159711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.866 [2024-11-20 09:59:14.160074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.866 [2024-11-20 09:59:14.160092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.866 [2024-11-20 09:59:14.160101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.866 [2024-11-20 09:59:14.160279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.866 [2024-11-20 09:59:14.160459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.866 [2024-11-20 09:59:14.160469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.866 [2024-11-20 09:59:14.160476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.866 [2024-11-20 09:59:14.160483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.866 [2024-11-20 09:59:14.172750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.866 [2024-11-20 09:59:14.173041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.866 [2024-11-20 09:59:14.173059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.866 [2024-11-20 09:59:14.173067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.866 [2024-11-20 09:59:14.173239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.866 [2024-11-20 09:59:14.173411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.866 [2024-11-20 09:59:14.173421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.866 [2024-11-20 09:59:14.173428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.866 [2024-11-20 09:59:14.173435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:50.866 [2024-11-20 09:59:14.185673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:50.866 [2024-11-20 09:59:14.186021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.866 [2024-11-20 09:59:14.186039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:50.866 [2024-11-20 09:59:14.186047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:50.866 [2024-11-20 09:59:14.186225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:50.866 [2024-11-20 09:59:14.186389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:50.866 [2024-11-20 09:59:14.186398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:50.866 [2024-11-20 09:59:14.186406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:50.866 [2024-11-20 09:59:14.186413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.127 [2024-11-20 09:59:14.198791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.127 [2024-11-20 09:59:14.199150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.127 [2024-11-20 09:59:14.199167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.127 [2024-11-20 09:59:14.199176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.127 [2024-11-20 09:59:14.199348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.127 [2024-11-20 09:59:14.199521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.127 [2024-11-20 09:59:14.199531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.127 [2024-11-20 09:59:14.199538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.127 [2024-11-20 09:59:14.199545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.127 [2024-11-20 09:59:14.211735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.127 [2024-11-20 09:59:14.212166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.127 [2024-11-20 09:59:14.212184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.127 [2024-11-20 09:59:14.212192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.127 [2024-11-20 09:59:14.212365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.127 [2024-11-20 09:59:14.212539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.127 [2024-11-20 09:59:14.212548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.127 [2024-11-20 09:59:14.212555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.127 [2024-11-20 09:59:14.212562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.127 [2024-11-20 09:59:14.224707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.127 [2024-11-20 09:59:14.225155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.127 [2024-11-20 09:59:14.225172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.127 [2024-11-20 09:59:14.225181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.127 [2024-11-20 09:59:14.225357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.127 [2024-11-20 09:59:14.225521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.127 [2024-11-20 09:59:14.225531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.127 [2024-11-20 09:59:14.225537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.127 [2024-11-20 09:59:14.225543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.127 [2024-11-20 09:59:14.237515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.127 [2024-11-20 09:59:14.237913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.127 [2024-11-20 09:59:14.237967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.127 [2024-11-20 09:59:14.238000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.127 [2024-11-20 09:59:14.238420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.127 [2024-11-20 09:59:14.238585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.127 [2024-11-20 09:59:14.238594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.127 [2024-11-20 09:59:14.238600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.127 [2024-11-20 09:59:14.238607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.127 [2024-11-20 09:59:14.250404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.127 [2024-11-20 09:59:14.250824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.127 [2024-11-20 09:59:14.250874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.127 [2024-11-20 09:59:14.250899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.127 [2024-11-20 09:59:14.251475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.127 [2024-11-20 09:59:14.251650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.127 [2024-11-20 09:59:14.251660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.127 [2024-11-20 09:59:14.251666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.127 [2024-11-20 09:59:14.251674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.127 [2024-11-20 09:59:14.263188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.127 [2024-11-20 09:59:14.263550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.127 [2024-11-20 09:59:14.263594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.263617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.264210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.264741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.264750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.264757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.264764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.276033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.276450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.128 [2024-11-20 09:59:14.276496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.276521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.277115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.277597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.277606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.277614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.277620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.288953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.289369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.128 [2024-11-20 09:59:14.289387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.289395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.289557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.289720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.289730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.289736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.289742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.301734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.302155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.128 [2024-11-20 09:59:14.302199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.302223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.302758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.303158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.303177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.303193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.303207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.316409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.316920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.128 [2024-11-20 09:59:14.316978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.317004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.317576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.317831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.317845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.317855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.317870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.329322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.329741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.128 [2024-11-20 09:59:14.329758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.329766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.329934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.330129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.330140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.330146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.330153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.342113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.342524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.128 [2024-11-20 09:59:14.342541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.342548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.342711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.342874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.342884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.342890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.342897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.354917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.355329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.128 [2024-11-20 09:59:14.355362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.355370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.355543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.355717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.355728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.355736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.355742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.367734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.368143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.128 [2024-11-20 09:59:14.368160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.368168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.368331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.368496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.368505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.368512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.368519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.380611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.381048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.128 [2024-11-20 09:59:14.381066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.128 [2024-11-20 09:59:14.381074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.128 [2024-11-20 09:59:14.381238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.128 [2024-11-20 09:59:14.381403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.128 [2024-11-20 09:59:14.381413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.128 [2024-11-20 09:59:14.381420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.128 [2024-11-20 09:59:14.381426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.128 [2024-11-20 09:59:14.393659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.128 [2024-11-20 09:59:14.394065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.129 [2024-11-20 09:59:14.394083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.129 [2024-11-20 09:59:14.394092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.129 [2024-11-20 09:59:14.394278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.129 [2024-11-20 09:59:14.394462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.129 [2024-11-20 09:59:14.394471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.129 [2024-11-20 09:59:14.394478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.129 [2024-11-20 09:59:14.394484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.129 [2024-11-20 09:59:14.406701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.129 [2024-11-20 09:59:14.407121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.129 [2024-11-20 09:59:14.407166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.129 [2024-11-20 09:59:14.407190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.129 [2024-11-20 09:59:14.407634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.129 [2024-11-20 09:59:14.407799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.129 [2024-11-20 09:59:14.407809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.129 [2024-11-20 09:59:14.407815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.129 [2024-11-20 09:59:14.407822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.129 [2024-11-20 09:59:14.419495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.129 [2024-11-20 09:59:14.419919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.129 [2024-11-20 09:59:14.419975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.129 [2024-11-20 09:59:14.420000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.129 [2024-11-20 09:59:14.420580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.129 [2024-11-20 09:59:14.421134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.129 [2024-11-20 09:59:14.421142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.129 [2024-11-20 09:59:14.421149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.129 [2024-11-20 09:59:14.421155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.129 [2024-11-20 09:59:14.432298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.129 [2024-11-20 09:59:14.432689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.129 [2024-11-20 09:59:14.432706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.129 [2024-11-20 09:59:14.432714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.129 [2024-11-20 09:59:14.432876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.129 [2024-11-20 09:59:14.433063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.129 [2024-11-20 09:59:14.433075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.129 [2024-11-20 09:59:14.433082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.129 [2024-11-20 09:59:14.433089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.129 [2024-11-20 09:59:14.445158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.129 [2024-11-20 09:59:14.445563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.129 [2024-11-20 09:59:14.445607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.129 [2024-11-20 09:59:14.445632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.129 [2024-11-20 09:59:14.446135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.129 [2024-11-20 09:59:14.446310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.129 [2024-11-20 09:59:14.446323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.129 [2024-11-20 09:59:14.446330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.129 [2024-11-20 09:59:14.446338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.390 [2024-11-20 09:59:14.458165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.390 [2024-11-20 09:59:14.458451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.390 [2024-11-20 09:59:14.458467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.390 [2024-11-20 09:59:14.458475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.390 [2024-11-20 09:59:14.458637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.390 [2024-11-20 09:59:14.458800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.390 [2024-11-20 09:59:14.458810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.390 [2024-11-20 09:59:14.458817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.390 [2024-11-20 09:59:14.458823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.390 [2024-11-20 09:59:14.470960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.390 [2024-11-20 09:59:14.471382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.390 [2024-11-20 09:59:14.471429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.390 [2024-11-20 09:59:14.471455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.390 [2024-11-20 09:59:14.471992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.390 [2024-11-20 09:59:14.472165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.390 [2024-11-20 09:59:14.472175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.390 [2024-11-20 09:59:14.472182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.390 [2024-11-20 09:59:14.472189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.390 [2024-11-20 09:59:14.483791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.390 [2024-11-20 09:59:14.484161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.390 [2024-11-20 09:59:14.484206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.390 [2024-11-20 09:59:14.484230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.484810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.485287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.485298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.485305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.485319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 5699.80 IOPS, 22.26 MiB/s [2024-11-20T08:59:14.723Z] [2024-11-20 09:59:14.496647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.391 [2024-11-20 09:59:14.497063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.391 [2024-11-20 09:59:14.497081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.391 [2024-11-20 09:59:14.497089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.497253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.497417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.497427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.497434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.497440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 [2024-11-20 09:59:14.509438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.391 [2024-11-20 09:59:14.509846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.391 [2024-11-20 09:59:14.509863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.391 [2024-11-20 09:59:14.509871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.510041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.510205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.510215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.510221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.510228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 [2024-11-20 09:59:14.522262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.391 [2024-11-20 09:59:14.522653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.391 [2024-11-20 09:59:14.522670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.391 [2024-11-20 09:59:14.522677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.522839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.523026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.523036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.523043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.523050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 [2024-11-20 09:59:14.535106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.391 [2024-11-20 09:59:14.535510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.391 [2024-11-20 09:59:14.535554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.391 [2024-11-20 09:59:14.535578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.536169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.536524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.536534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.536540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.536547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 [2024-11-20 09:59:14.548209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.391 [2024-11-20 09:59:14.548571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.391 [2024-11-20 09:59:14.548589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.391 [2024-11-20 09:59:14.548597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.548774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.548960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.548972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.548980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.548988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 [2024-11-20 09:59:14.561310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.391 [2024-11-20 09:59:14.561671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.391 [2024-11-20 09:59:14.561689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.391 [2024-11-20 09:59:14.561698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.561871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.562049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.562060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.562067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.562074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 [2024-11-20 09:59:14.574415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.391 [2024-11-20 09:59:14.574755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.391 [2024-11-20 09:59:14.574772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.391 [2024-11-20 09:59:14.574780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.574953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.575142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.575152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.575158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.575165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 [2024-11-20 09:59:14.587286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.391 [2024-11-20 09:59:14.587700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.391 [2024-11-20 09:59:14.587717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.391 [2024-11-20 09:59:14.587725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.587887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.588063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.588074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.588081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.588087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 [2024-11-20 09:59:14.600134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.391 [2024-11-20 09:59:14.600456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.391 [2024-11-20 09:59:14.600473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.391 [2024-11-20 09:59:14.600480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.391 [2024-11-20 09:59:14.600643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.391 [2024-11-20 09:59:14.600805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.391 [2024-11-20 09:59:14.600815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.391 [2024-11-20 09:59:14.600822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.391 [2024-11-20 09:59:14.600828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.391 [2024-11-20 09:59:14.612924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.392 [2024-11-20 09:59:14.613348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.392 [2024-11-20 09:59:14.613365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.392 [2024-11-20 09:59:14.613372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.392 [2024-11-20 09:59:14.613535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.392 [2024-11-20 09:59:14.613699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.392 [2024-11-20 09:59:14.613711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.392 [2024-11-20 09:59:14.613718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.392 [2024-11-20 09:59:14.613724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.392 [2024-11-20 09:59:14.625816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.392 [2024-11-20 09:59:14.626238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.392 [2024-11-20 09:59:14.626256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.392 [2024-11-20 09:59:14.626264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.392 [2024-11-20 09:59:14.626428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.392 [2024-11-20 09:59:14.626591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.392 [2024-11-20 09:59:14.626601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.392 [2024-11-20 09:59:14.626608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.392 [2024-11-20 09:59:14.626614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.392 [2024-11-20 09:59:14.638688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.392 [2024-11-20 09:59:14.639120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.392 [2024-11-20 09:59:14.639138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.392 [2024-11-20 09:59:14.639145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.392 [2024-11-20 09:59:14.639308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.392 [2024-11-20 09:59:14.639470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.392 [2024-11-20 09:59:14.639481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.392 [2024-11-20 09:59:14.639487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.392 [2024-11-20 09:59:14.639494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.392 [2024-11-20 09:59:14.651830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.392 [2024-11-20 09:59:14.652262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.392 [2024-11-20 09:59:14.652281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.392 [2024-11-20 09:59:14.652289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.392 [2024-11-20 09:59:14.652467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.392 [2024-11-20 09:59:14.652648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.392 [2024-11-20 09:59:14.652658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.392 [2024-11-20 09:59:14.652665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.392 [2024-11-20 09:59:14.652675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.392 [2024-11-20 09:59:14.664822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.392 [2024-11-20 09:59:14.665243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.392 [2024-11-20 09:59:14.665261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.392 [2024-11-20 09:59:14.665268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.392 [2024-11-20 09:59:14.665431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.392 [2024-11-20 09:59:14.665595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.392 [2024-11-20 09:59:14.665605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.392 [2024-11-20 09:59:14.665611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.392 [2024-11-20 09:59:14.665618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.392 [2024-11-20 09:59:14.677744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.392 [2024-11-20 09:59:14.678172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.392 [2024-11-20 09:59:14.678189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.392 [2024-11-20 09:59:14.678197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.392 [2024-11-20 09:59:14.678360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.392 [2024-11-20 09:59:14.678524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.392 [2024-11-20 09:59:14.678534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.392 [2024-11-20 09:59:14.678540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.392 [2024-11-20 09:59:14.678547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.392 [2024-11-20 09:59:14.690661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.392 [2024-11-20 09:59:14.690952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.392 [2024-11-20 09:59:14.690970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.392 [2024-11-20 09:59:14.690978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.392 [2024-11-20 09:59:14.691140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.392 [2024-11-20 09:59:14.691304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.392 [2024-11-20 09:59:14.691313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.392 [2024-11-20 09:59:14.691320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.392 [2024-11-20 09:59:14.691326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.392 [2024-11-20 09:59:14.703560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.392 [2024-11-20 09:59:14.703997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.392 [2024-11-20 09:59:14.704043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.392 [2024-11-20 09:59:14.704068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.392 [2024-11-20 09:59:14.704562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.392 [2024-11-20 09:59:14.704747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.392 [2024-11-20 09:59:14.704756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.392 [2024-11-20 09:59:14.704762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.392 [2024-11-20 09:59:14.704769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.392 [2024-11-20 09:59:14.716618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.392 [2024-11-20 09:59:14.717027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.392 [2024-11-20 09:59:14.717045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.392 [2024-11-20 09:59:14.717054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.392 [2024-11-20 09:59:14.717231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.393 [2024-11-20 09:59:14.717410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.393 [2024-11-20 09:59:14.717419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.393 [2024-11-20 09:59:14.717426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.393 [2024-11-20 09:59:14.717433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.653 [2024-11-20 09:59:14.729760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.653 [2024-11-20 09:59:14.730177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.653 [2024-11-20 09:59:14.730195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.653 [2024-11-20 09:59:14.730204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.653 [2024-11-20 09:59:14.730382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.653 [2024-11-20 09:59:14.730559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.653 [2024-11-20 09:59:14.730570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.653 [2024-11-20 09:59:14.730577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.653 [2024-11-20 09:59:14.730584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.653 [2024-11-20 09:59:14.742926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.653 [2024-11-20 09:59:14.743360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.653 [2024-11-20 09:59:14.743380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.653 [2024-11-20 09:59:14.743389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.653 [2024-11-20 09:59:14.743572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.653 [2024-11-20 09:59:14.743752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.653 [2024-11-20 09:59:14.743765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.653 [2024-11-20 09:59:14.743774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.653 [2024-11-20 09:59:14.743782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.653 [2024-11-20 09:59:14.756133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.653 [2024-11-20 09:59:14.756459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.653 [2024-11-20 09:59:14.756478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.653 [2024-11-20 09:59:14.756486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.653 [2024-11-20 09:59:14.756663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.653 [2024-11-20 09:59:14.756843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.653 [2024-11-20 09:59:14.756854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.653 [2024-11-20 09:59:14.756862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.653 [2024-11-20 09:59:14.756871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.653 [2024-11-20 09:59:14.769195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.653 [2024-11-20 09:59:14.769543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.653 [2024-11-20 09:59:14.769587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.653 [2024-11-20 09:59:14.769611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.653 [2024-11-20 09:59:14.769886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.653 [2024-11-20 09:59:14.770070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.653 [2024-11-20 09:59:14.770081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.653 [2024-11-20 09:59:14.770088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.653 [2024-11-20 09:59:14.770095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.653 [2024-11-20 09:59:14.782209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.653 [2024-11-20 09:59:14.782664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.653 [2024-11-20 09:59:14.782710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.653 [2024-11-20 09:59:14.782735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.653 [2024-11-20 09:59:14.783328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.653 [2024-11-20 09:59:14.783842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.653 [2024-11-20 09:59:14.783855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.653 [2024-11-20 09:59:14.783862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.653 [2024-11-20 09:59:14.783869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.653 [2024-11-20 09:59:14.795232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.653 [2024-11-20 09:59:14.795525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.653 [2024-11-20 09:59:14.795570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.653 [2024-11-20 09:59:14.795594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.653 [2024-11-20 09:59:14.796184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.653 [2024-11-20 09:59:14.796772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.654 [2024-11-20 09:59:14.796799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.654 [2024-11-20 09:59:14.796820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.654 [2024-11-20 09:59:14.796826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.654 [2024-11-20 09:59:14.808166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.654 [2024-11-20 09:59:14.808533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.654 [2024-11-20 09:59:14.808550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.654 [2024-11-20 09:59:14.808558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.654 [2024-11-20 09:59:14.808721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.654 [2024-11-20 09:59:14.808885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.654 [2024-11-20 09:59:14.808894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.654 [2024-11-20 09:59:14.808901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.654 [2024-11-20 09:59:14.808908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.654 [2024-11-20 09:59:14.821050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.654 [2024-11-20 09:59:14.821439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.654 [2024-11-20 09:59:14.821456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.654 [2024-11-20 09:59:14.821464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.654 [2024-11-20 09:59:14.821626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.654 [2024-11-20 09:59:14.821789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.654 [2024-11-20 09:59:14.821798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.654 [2024-11-20 09:59:14.821805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.654 [2024-11-20 09:59:14.821815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.654 [2024-11-20 09:59:14.834038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.654 [2024-11-20 09:59:14.834357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.654 [2024-11-20 09:59:14.834374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.654 [2024-11-20 09:59:14.834382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.654 [2024-11-20 09:59:14.834545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.654 [2024-11-20 09:59:14.834709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.654 [2024-11-20 09:59:14.834718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.654 [2024-11-20 09:59:14.834725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.654 [2024-11-20 09:59:14.834731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.654 [2024-11-20 09:59:14.847061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.654 [2024-11-20 09:59:14.847419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.654 [2024-11-20 09:59:14.847463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.654 [2024-11-20 09:59:14.847487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.654 [2024-11-20 09:59:14.848004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.654 [2024-11-20 09:59:14.848277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.654 [2024-11-20 09:59:14.848295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.654 [2024-11-20 09:59:14.848309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.654 [2024-11-20 09:59:14.848323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.654 [2024-11-20 09:59:14.861849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.654 [2024-11-20 09:59:14.862231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.654 [2024-11-20 09:59:14.862254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.654 [2024-11-20 09:59:14.862266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.654 [2024-11-20 09:59:14.862520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.654 [2024-11-20 09:59:14.862775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.654 [2024-11-20 09:59:14.862789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.654 [2024-11-20 09:59:14.862799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.654 [2024-11-20 09:59:14.862809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.654 [2024-11-20 09:59:14.874831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.654 [2024-11-20 09:59:14.875115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.654 [2024-11-20 09:59:14.875136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.654 [2024-11-20 09:59:14.875144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.654 [2024-11-20 09:59:14.875310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.654 [2024-11-20 09:59:14.875479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.654 [2024-11-20 09:59:14.875489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.654 [2024-11-20 09:59:14.875497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.654 [2024-11-20 09:59:14.875504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.654 [2024-11-20 09:59:14.887715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.654 [2024-11-20 09:59:14.888152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.654 [2024-11-20 09:59:14.888171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.654 [2024-11-20 09:59:14.888180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.654 [2024-11-20 09:59:14.888353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.654 [2024-11-20 09:59:14.888525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.654 [2024-11-20 09:59:14.888535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.654 [2024-11-20 09:59:14.888542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.654 [2024-11-20 09:59:14.888549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.654 [2024-11-20 09:59:14.900621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.654 [2024-11-20 09:59:14.901003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.654 [2024-11-20 09:59:14.901022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.654 [2024-11-20 09:59:14.901031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.654 [2024-11-20 09:59:14.901205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.654 [2024-11-20 09:59:14.901379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.654 [2024-11-20 09:59:14.901389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.654 [2024-11-20 09:59:14.901396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.654 [2024-11-20 09:59:14.901403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.654 [2024-11-20 09:59:14.913782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.654 [2024-11-20 09:59:14.914177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.654 [2024-11-20 09:59:14.914195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.654 [2024-11-20 09:59:14.914203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.654 [2024-11-20 09:59:14.914391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.654 [2024-11-20 09:59:14.914583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.654 [2024-11-20 09:59:14.914593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.654 [2024-11-20 09:59:14.914601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.654 [2024-11-20 09:59:14.914608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.654 [2024-11-20 09:59:14.926631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.654 [2024-11-20 09:59:14.927018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.654 [2024-11-20 09:59:14.927064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.654 [2024-11-20 09:59:14.927088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.655 [2024-11-20 09:59:14.927667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.655 [2024-11-20 09:59:14.928039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.655 [2024-11-20 09:59:14.928050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.655 [2024-11-20 09:59:14.928056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.655 [2024-11-20 09:59:14.928064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.655 [2024-11-20 09:59:14.939467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.655 [2024-11-20 09:59:14.939866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.655 [2024-11-20 09:59:14.939884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.655 [2024-11-20 09:59:14.939892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.655 [2024-11-20 09:59:14.940071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.655 [2024-11-20 09:59:14.940254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.655 [2024-11-20 09:59:14.940264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.655 [2024-11-20 09:59:14.940270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.655 [2024-11-20 09:59:14.940277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.655 [2024-11-20 09:59:14.952440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.655 [2024-11-20 09:59:14.952787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.655 [2024-11-20 09:59:14.952805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.655 [2024-11-20 09:59:14.952813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.655 [2024-11-20 09:59:14.952990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.655 [2024-11-20 09:59:14.953165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.655 [2024-11-20 09:59:14.953178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.655 [2024-11-20 09:59:14.953185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.655 [2024-11-20 09:59:14.953193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.655 [2024-11-20 09:59:14.965413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.655 [2024-11-20 09:59:14.965811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.655 [2024-11-20 09:59:14.965828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.655 [2024-11-20 09:59:14.965836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.655 [2024-11-20 09:59:14.966005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.655 [2024-11-20 09:59:14.966168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.655 [2024-11-20 09:59:14.966177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.655 [2024-11-20 09:59:14.966184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.655 [2024-11-20 09:59:14.966190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.655 [2024-11-20 09:59:14.978357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.655 [2024-11-20 09:59:14.978738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.655 [2024-11-20 09:59:14.978756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.655 [2024-11-20 09:59:14.978765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.655 [2024-11-20 09:59:14.978942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.655 [2024-11-20 09:59:14.979127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.655 [2024-11-20 09:59:14.979148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.655 [2024-11-20 09:59:14.979156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.655 [2024-11-20 09:59:14.979163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.915 [2024-11-20 09:59:14.991465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.915 [2024-11-20 09:59:14.991900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.915 [2024-11-20 09:59:14.991918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.915 [2024-11-20 09:59:14.991927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.915 [2024-11-20 09:59:14.992109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.915 [2024-11-20 09:59:14.992288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.915 [2024-11-20 09:59:14.992298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.915 [2024-11-20 09:59:14.992305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.915 [2024-11-20 09:59:14.992313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.915 [2024-11-20 09:59:15.004657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.915 [2024-11-20 09:59:15.005057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.915 [2024-11-20 09:59:15.005076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.915 [2024-11-20 09:59:15.005086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.915 [2024-11-20 09:59:15.005265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.915 [2024-11-20 09:59:15.005444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.915 [2024-11-20 09:59:15.005455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.915 [2024-11-20 09:59:15.005461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.915 [2024-11-20 09:59:15.005470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.915 [2024-11-20 09:59:15.017816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.915 [2024-11-20 09:59:15.018109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.915 [2024-11-20 09:59:15.018128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.915 [2024-11-20 09:59:15.018136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.915 [2024-11-20 09:59:15.018313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.915 [2024-11-20 09:59:15.018492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.915 [2024-11-20 09:59:15.018503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.915 [2024-11-20 09:59:15.018512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.915 [2024-11-20 09:59:15.018519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.915 [2024-11-20 09:59:15.030851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.915 [2024-11-20 09:59:15.031267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.915 [2024-11-20 09:59:15.031285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.915 [2024-11-20 09:59:15.031293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.915 [2024-11-20 09:59:15.031470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.915 [2024-11-20 09:59:15.031649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.915 [2024-11-20 09:59:15.031660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.915 [2024-11-20 09:59:15.031666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.915 [2024-11-20 09:59:15.031673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3065794 Killed "${NVMF_APP[@]}" "$@"
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:51.915 [2024-11-20 09:59:15.044016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:51.915 [2024-11-20 09:59:15.044446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.915 [2024-11-20 09:59:15.044464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.915 [2024-11-20 09:59:15.044473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:51.915 [2024-11-20 09:59:15.044650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.915 [2024-11-20 09:59:15.044829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.915 [2024-11-20 09:59:15.044840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.915 [2024-11-20 09:59:15.044847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.915 [2024-11-20 09:59:15.044854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3067141
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3067141
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3067141 ']'
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:51.915 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:51.916 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:51.916 [2024-11-20 09:59:15.057068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.916 [2024-11-20 09:59:15.057433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.916 [2024-11-20 09:59:15.057451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.916 [2024-11-20 09:59:15.057459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.916 [2024-11-20 09:59:15.057638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.916 [2024-11-20 09:59:15.057816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.916 [2024-11-20 09:59:15.057826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.916 [2024-11-20 09:59:15.057834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.916 [2024-11-20 09:59:15.057841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.916 [2024-11-20 09:59:15.070213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.916 [2024-11-20 09:59:15.070569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.916 [2024-11-20 09:59:15.070588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.916 [2024-11-20 09:59:15.070596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.916 [2024-11-20 09:59:15.070773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.916 [2024-11-20 09:59:15.070957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.916 [2024-11-20 09:59:15.070966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.916 [2024-11-20 09:59:15.070973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.916 [2024-11-20 09:59:15.070980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.916 [2024-11-20 09:59:15.083330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.916 [2024-11-20 09:59:15.083668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.916 [2024-11-20 09:59:15.083687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.916 [2024-11-20 09:59:15.083695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.916 [2024-11-20 09:59:15.083873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.916 [2024-11-20 09:59:15.084059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.916 [2024-11-20 09:59:15.084070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.916 [2024-11-20 09:59:15.084077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.916 [2024-11-20 09:59:15.084083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.916 [2024-11-20 09:59:15.096469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.916 [2024-11-20 09:59:15.096830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.916 [2024-11-20 09:59:15.096848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.916 [2024-11-20 09:59:15.096856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.916 [2024-11-20 09:59:15.097053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.916 [2024-11-20 09:59:15.097232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.916 [2024-11-20 09:59:15.097242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.916 [2024-11-20 09:59:15.097249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.916 [2024-11-20 09:59:15.097256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.916 [2024-11-20 09:59:15.097454] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization...
00:26:51.916 [2024-11-20 09:59:15.097495] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:51.916 [2024-11-20 09:59:15.109567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.916 [2024-11-20 09:59:15.109857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.916 [2024-11-20 09:59:15.109874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.916 [2024-11-20 09:59:15.109882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.916 [2024-11-20 09:59:15.110158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.916 [2024-11-20 09:59:15.110335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.916 [2024-11-20 09:59:15.110345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.916 [2024-11-20 09:59:15.110353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.916 [2024-11-20 09:59:15.110360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.916 [2024-11-20 09:59:15.122744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.916 [2024-11-20 09:59:15.123042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.916 [2024-11-20 09:59:15.123061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.916 [2024-11-20 09:59:15.123069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.916 [2024-11-20 09:59:15.123246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.916 [2024-11-20 09:59:15.123425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.916 [2024-11-20 09:59:15.123434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.916 [2024-11-20 09:59:15.123441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.916 [2024-11-20 09:59:15.123449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.916 [2024-11-20 09:59:15.135943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.916 [2024-11-20 09:59:15.136320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.916 [2024-11-20 09:59:15.136338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.916 [2024-11-20 09:59:15.136346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.916 [2024-11-20 09:59:15.136524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.916 [2024-11-20 09:59:15.136703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.916 [2024-11-20 09:59:15.136713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.916 [2024-11-20 09:59:15.136720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.916 [2024-11-20 09:59:15.136727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.916 [2024-11-20 09:59:15.149077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.916 [2024-11-20 09:59:15.149511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-11-20 09:59:15.149529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.916 [2024-11-20 09:59:15.149545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.916 [2024-11-20 09:59:15.149723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.916 [2024-11-20 09:59:15.149902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.916 [2024-11-20 09:59:15.149912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.916 [2024-11-20 09:59:15.149919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.916 [2024-11-20 09:59:15.149925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.916 [2024-11-20 09:59:15.162267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:51.916 [2024-11-20 09:59:15.162617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.916 [2024-11-20 09:59:15.162636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420 00:26:51.916 [2024-11-20 09:59:15.162646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set 00:26:51.916 [2024-11-20 09:59:15.162824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor 00:26:51.916 [2024-11-20 09:59:15.163010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:51.916 [2024-11-20 09:59:15.163021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:51.916 [2024-11-20 09:59:15.163030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:51.916 [2024-11-20 09:59:15.163038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:51.916 [2024-11-20 09:59:15.175366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.916 [2024-11-20 09:59:15.175795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.916 [2024-11-20 09:59:15.175813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.916 [2024-11-20 09:59:15.175822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.916 [2024-11-20 09:59:15.176004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.917 [2024-11-20 09:59:15.176184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.917 [2024-11-20 09:59:15.176195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.917 [2024-11-20 09:59:15.176203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.917 [2024-11-20 09:59:15.176211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.917 [2024-11-20 09:59:15.177072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:51.917 [2024-11-20 09:59:15.188425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.917 [2024-11-20 09:59:15.188809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.917 [2024-11-20 09:59:15.188830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.917 [2024-11-20 09:59:15.188838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.917 [2024-11-20 09:59:15.189028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.917 [2024-11-20 09:59:15.189208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.917 [2024-11-20 09:59:15.189218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.917 [2024-11-20 09:59:15.189225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.917 [2024-11-20 09:59:15.189233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.917 [2024-11-20 09:59:15.201631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.917 [2024-11-20 09:59:15.202004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.917 [2024-11-20 09:59:15.202023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.917 [2024-11-20 09:59:15.202031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.917 [2024-11-20 09:59:15.202210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.917 [2024-11-20 09:59:15.202390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.917 [2024-11-20 09:59:15.202400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.917 [2024-11-20 09:59:15.202406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.917 [2024-11-20 09:59:15.202413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.917 [2024-11-20 09:59:15.214617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.917 [2024-11-20 09:59:15.215044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.917 [2024-11-20 09:59:15.215063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.917 [2024-11-20 09:59:15.215072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.917 [2024-11-20 09:59:15.215250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.917 [2024-11-20 09:59:15.215430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.917 [2024-11-20 09:59:15.215440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.917 [2024-11-20 09:59:15.215448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.917 [2024-11-20 09:59:15.215455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.917 [2024-11-20 09:59:15.220312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:51.917 [2024-11-20 09:59:15.220339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:51.917 [2024-11-20 09:59:15.220346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:51.917 [2024-11-20 09:59:15.220352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:51.917 [2024-11-20 09:59:15.220357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:51.917 [2024-11-20 09:59:15.221627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:51.917 [2024-11-20 09:59:15.221736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:51.917 [2024-11-20 09:59:15.221738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:51.917 [2024-11-20 09:59:15.227790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.917 [2024-11-20 09:59:15.228253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.917 [2024-11-20 09:59:15.228272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.917 [2024-11-20 09:59:15.228281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.917 [2024-11-20 09:59:15.228460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.917 [2024-11-20 09:59:15.228640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.917 [2024-11-20 09:59:15.228650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.917 [2024-11-20 09:59:15.228658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.917 [2024-11-20 09:59:15.228665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:51.917 [2024-11-20 09:59:15.240845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:51.917 [2024-11-20 09:59:15.241232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:51.917 [2024-11-20 09:59:15.241253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:51.917 [2024-11-20 09:59:15.241262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:51.917 [2024-11-20 09:59:15.241441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:51.917 [2024-11-20 09:59:15.241621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:51.917 [2024-11-20 09:59:15.241631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:51.917 [2024-11-20 09:59:15.241638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:51.917 [2024-11-20 09:59:15.241647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.177 [2024-11-20 09:59:15.254051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.177 [2024-11-20 09:59:15.254431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.177 [2024-11-20 09:59:15.254452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.177 [2024-11-20 09:59:15.254462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.177 [2024-11-20 09:59:15.254643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.177 [2024-11-20 09:59:15.254823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.178 [2024-11-20 09:59:15.254834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.178 [2024-11-20 09:59:15.254842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.178 [2024-11-20 09:59:15.254850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.178 [2024-11-20 09:59:15.267195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.178 [2024-11-20 09:59:15.267558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.178 [2024-11-20 09:59:15.267579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.178 [2024-11-20 09:59:15.267593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.178 [2024-11-20 09:59:15.267772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.178 [2024-11-20 09:59:15.267961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.178 [2024-11-20 09:59:15.267971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.178 [2024-11-20 09:59:15.267979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.178 [2024-11-20 09:59:15.267988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.178 [2024-11-20 09:59:15.280323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.178 [2024-11-20 09:59:15.280748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.178 [2024-11-20 09:59:15.280769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.178 [2024-11-20 09:59:15.280778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.178 [2024-11-20 09:59:15.280962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.178 [2024-11-20 09:59:15.281143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.178 [2024-11-20 09:59:15.281154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.178 [2024-11-20 09:59:15.281161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.178 [2024-11-20 09:59:15.281169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.178 [2024-11-20 09:59:15.293364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.178 [2024-11-20 09:59:15.293731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.178 [2024-11-20 09:59:15.293749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.178 [2024-11-20 09:59:15.293758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.178 [2024-11-20 09:59:15.293937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.178 [2024-11-20 09:59:15.294123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.178 [2024-11-20 09:59:15.294154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.178 [2024-11-20 09:59:15.294164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.178 [2024-11-20 09:59:15.294173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.178 [2024-11-20 09:59:15.306516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.178 [2024-11-20 09:59:15.306956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.178 [2024-11-20 09:59:15.306975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.178 [2024-11-20 09:59:15.306984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.178 [2024-11-20 09:59:15.307162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.178 [2024-11-20 09:59:15.307349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.178 [2024-11-20 09:59:15.307359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.178 [2024-11-20 09:59:15.307366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.178 [2024-11-20 09:59:15.307373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.178 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:52.178 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:26:52.178 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:52.178 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:52.178 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.178 [2024-11-20 09:59:15.319688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.178 [2024-11-20 09:59:15.320121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.178 [2024-11-20 09:59:15.320143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.178 [2024-11-20 09:59:15.320151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.178 [2024-11-20 09:59:15.320330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.178 [2024-11-20 09:59:15.320507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.178 [2024-11-20 09:59:15.320515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.178 [2024-11-20 09:59:15.320522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.178 [2024-11-20 09:59:15.320528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.178 [2024-11-20 09:59:15.332852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.178 [2024-11-20 09:59:15.333193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.178 [2024-11-20 09:59:15.333210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.178 [2024-11-20 09:59:15.333218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.178 [2024-11-20 09:59:15.333395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.178 [2024-11-20 09:59:15.333573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.178 [2024-11-20 09:59:15.333581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.178 [2024-11-20 09:59:15.333588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.178 [2024-11-20 09:59:15.333595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.178 [2024-11-20 09:59:15.345917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.178 [2024-11-20 09:59:15.346255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.178 [2024-11-20 09:59:15.346273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.178 [2024-11-20 09:59:15.346281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.178 [2024-11-20 09:59:15.346462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.178 [2024-11-20 09:59:15.346640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.178 [2024-11-20 09:59:15.346649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.178 [2024-11-20 09:59:15.346656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.178 [2024-11-20 09:59:15.346663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.178 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:52.178 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:52.178 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.178 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.178 [2024-11-20 09:59:15.357200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:52.179 [2024-11-20 09:59:15.358998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.179 [2024-11-20 09:59:15.359337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.179 [2024-11-20 09:59:15.359354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.179 [2024-11-20 09:59:15.359362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.179 [2024-11-20 09:59:15.359539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.179 [2024-11-20 09:59:15.359716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.179 [2024-11-20 09:59:15.359725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.179 [2024-11-20 09:59:15.359732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.179 [2024-11-20 09:59:15.359738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.179 [2024-11-20 09:59:15.372055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.179 [2024-11-20 09:59:15.372398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.179 [2024-11-20 09:59:15.372415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.179 [2024-11-20 09:59:15.372423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.179 [2024-11-20 09:59:15.372600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.179 [2024-11-20 09:59:15.372778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.179 [2024-11-20 09:59:15.372787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.179 [2024-11-20 09:59:15.372793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.179 [2024-11-20 09:59:15.372800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.179 [2024-11-20 09:59:15.385140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.179 [2024-11-20 09:59:15.385572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.179 [2024-11-20 09:59:15.385589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.179 [2024-11-20 09:59:15.385596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.179 [2024-11-20 09:59:15.385774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.179 [2024-11-20 09:59:15.385959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.179 [2024-11-20 09:59:15.385968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.179 [2024-11-20 09:59:15.385975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.179 [2024-11-20 09:59:15.385982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.179 Malloc0
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.179 [2024-11-20 09:59:15.398331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.179 [2024-11-20 09:59:15.398786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.179 [2024-11-20 09:59:15.398804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.179 [2024-11-20 09:59:15.398812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.179 [2024-11-20 09:59:15.398994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.179 [2024-11-20 09:59:15.399172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.179 [2024-11-20 09:59:15.399180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.179 [2024-11-20 09:59:15.399187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.179 [2024-11-20 09:59:15.399193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.179 [2024-11-20 09:59:15.411526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.179 [2024-11-20 09:59:15.411964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.179 [2024-11-20 09:59:15.411982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e500 with addr=10.0.0.2, port=4420
00:26:52.179 [2024-11-20 09:59:15.411990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e500 is same with the state(6) to be set
00:26:52.179 [2024-11-20 09:59:15.412168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e500 (9): Bad file descriptor
00:26:52.179 [2024-11-20 09:59:15.412348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.179 [2024-11-20 09:59:15.412357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.179 [2024-11-20 09:59:15.412363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.179 [2024-11-20 09:59:15.412370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:52.179 [2024-11-20 09:59:15.419084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.179 09:59:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3066077
00:26:52.179 [2024-11-20 09:59:15.424718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.438 4749.83 IOPS, 18.55 MiB/s [2024-11-20T08:59:15.770Z] [2024-11-20 09:59:15.576183] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:26:54.312 5522.29 IOPS, 21.57 MiB/s [2024-11-20T08:59:18.582Z] 6247.12 IOPS, 24.40 MiB/s [2024-11-20T08:59:19.519Z] 6780.89 IOPS, 26.49 MiB/s [2024-11-20T08:59:20.898Z] 7216.00 IOPS, 28.19 MiB/s [2024-11-20T08:59:21.836Z] 7591.36 IOPS, 29.65 MiB/s [2024-11-20T08:59:22.773Z] 7886.25 IOPS, 30.81 MiB/s [2024-11-20T08:59:23.711Z] 8139.38 IOPS, 31.79 MiB/s [2024-11-20T08:59:24.646Z] 8358.93 IOPS, 32.65 MiB/s [2024-11-20T08:59:24.646Z] 8550.13 IOPS, 33.40 MiB/s
00:27:01.314 Latency(us)
00:27:01.314 [2024-11-20T08:59:24.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:01.314 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:01.314 Verification LBA range: start 0x0 length 0x4000
00:27:01.314 Nvme1n1 : 15.01 8552.01 33.41 11082.64 0.00 6498.76 448.78 23592.96
00:27:01.314 [2024-11-20T08:59:24.646Z] ===================================================================================================================
00:27:01.314 [2024-11-20T08:59:24.646Z] Total : 8552.01 33.41 11082.64 0.00 6498.76 448.78 23592.96
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3067141 ']'
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3067141
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3067141 ']'
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3067141
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3067141
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3067141'
killing process with pid 3067141
09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3067141
00:27:01.574 09:59:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3067141
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:01.834 09:59:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:04.372 09:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:04.372
00:27:04.372 real 0m26.219s
00:27:04.372 user 1m1.254s
00:27:04.372 sys 0m6.752s
00:27:04.372 09:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:04.372 09:59:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:04.372 ************************************
00:27:04.372 END TEST nvmf_bdevperf
************************************ 00:27:04.372 09:59:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:04.372 09:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:04.372 09:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:04.372 09:59:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.372 ************************************ 00:27:04.372 START TEST nvmf_target_disconnect 00:27:04.372 ************************************ 00:27:04.372 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:04.372 * Looking for test storage... 00:27:04.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.372 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # lcov --version 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:27:04.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.373 --rc genhtml_branch_coverage=1 00:27:04.373 --rc genhtml_function_coverage=1 00:27:04.373 --rc genhtml_legend=1 00:27:04.373 --rc geninfo_all_blocks=1 00:27:04.373 --rc geninfo_unexecuted_blocks=1 
00:27:04.373 00:27:04.373 ' 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:27:04.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.373 --rc genhtml_branch_coverage=1 00:27:04.373 --rc genhtml_function_coverage=1 00:27:04.373 --rc genhtml_legend=1 00:27:04.373 --rc geninfo_all_blocks=1 00:27:04.373 --rc geninfo_unexecuted_blocks=1 00:27:04.373 00:27:04.373 ' 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:27:04.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.373 --rc genhtml_branch_coverage=1 00:27:04.373 --rc genhtml_function_coverage=1 00:27:04.373 --rc genhtml_legend=1 00:27:04.373 --rc geninfo_all_blocks=1 00:27:04.373 --rc geninfo_unexecuted_blocks=1 00:27:04.373 00:27:04.373 ' 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:27:04.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.373 --rc genhtml_branch_coverage=1 00:27:04.373 --rc genhtml_function_coverage=1 00:27:04.373 --rc genhtml_legend=1 00:27:04.373 --rc geninfo_all_blocks=1 00:27:04.373 --rc geninfo_unexecuted_blocks=1 00:27:04.373 00:27:04.373 ' 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.373 09:59:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:04.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:04.373 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:04.374 09:59:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:09.650 
09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:09.650 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:09.650 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:09.650 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:09.651 Found net devices under 0000:86:00.0: cvl_0_0 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:09.651 Found net devices under 0000:86:00.1: cvl_0_1 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.651 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:09.910 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:09.911 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.911 09:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.911 09:59:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:27:09.911 00:27:09.911 --- 10.0.0.2 ping statistics --- 00:27:09.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.911 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:09.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:27:09.911 00:27:09.911 --- 10.0.0.1 ping statistics --- 00:27:09.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.911 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.911 09:59:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.911 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.171 ************************************ 00:27:10.171 START TEST nvmf_target_disconnect_tc1 00:27:10.171 ************************************ 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:10.171 [2024-11-20 09:59:33.403571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:10.171 [2024-11-20 09:59:33.403684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e5ab0 with 
addr=10.0.0.2, port=4420 00:27:10.171 [2024-11-20 09:59:33.403728] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:10.171 [2024-11-20 09:59:33.403753] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:10.171 [2024-11-20 09:59:33.403772] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:10.171 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:10.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:10.171 Initializing NVMe Controllers 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.171 00:27:10.171 real 0m0.119s 00:27:10.171 user 0m0.057s 00:27:10.171 sys 0m0.062s 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:10.171 ************************************ 00:27:10.171 END TEST nvmf_target_disconnect_tc1 00:27:10.171 ************************************ 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.171 09:59:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:10.171 ************************************ 00:27:10.171 START TEST nvmf_target_disconnect_tc2 00:27:10.171 ************************************ 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3072143 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3072143 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3072143 ']' 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.171 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.431 [2024-11-20 09:59:33.546237] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:27:10.431 [2024-11-20 09:59:33.546289] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.431 [2024-11-20 09:59:33.626418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:10.431 [2024-11-20 09:59:33.669813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.431 [2024-11-20 09:59:33.669850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.431 [2024-11-20 09:59:33.669858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.431 [2024-11-20 09:59:33.669864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.431 [2024-11-20 09:59:33.669870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:10.431 [2024-11-20 09:59:33.671328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:10.431 [2024-11-20 09:59:33.671436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:10.431 [2024-11-20 09:59:33.671545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:10.431 [2024-11-20 09:59:33.671545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.690 Malloc0 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.690 09:59:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.690 [2024-11-20 09:59:33.852450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.690 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.691 09:59:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.691 [2024-11-20 09:59:33.884723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3072352 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:10.691 09:59:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:12.595 09:59:35 
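The `rpc_cmd` calls traced above amount to a four-step target bring-up before the reconnect workload starts. A sketch of the same sequence as direct `rpc.py` invocations (the bdev name `Malloc0`, the subsystem NQN, serial, and listener address all come from this log; `$SPDK_DIR` is a placeholder for the SPDK checkout, and on this rig each call would also be wrapped in `ip netns exec cvl_0_0_ns_spdk`):

```shell
RPC="$SPDK_DIR/scripts/rpc.py"

# 1. Back the namespace with a 64 MiB malloc bdev using 512-byte blocks.
$RPC bdev_malloc_create 64 512 -b Malloc0

# 2. Initialize the TCP transport ("*** TCP Transport Init ***" in the log).
$RPC nvmf_create_transport -t tcp -o

# 3. Create the subsystem and attach the bdev as its namespace.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# 4. Listen on 10.0.0.2:4420 for both the subsystem and discovery service.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

With the listener up, the test launches `build/examples/reconnect` against that address and then `kill -9`s the target mid-I/O, which is what produces the burst of failed completions below.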
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3072143 00:27:12.595 09:59:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 
00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 [2024-11-20 09:59:35.912823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 
starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting 
I/O failed 00:27:12.595 [2024-11-20 09:59:35.913029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Write completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.595 Read completed with error (sct=0, sc=8) 00:27:12.595 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 
00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 [2024-11-20 09:59:35.913226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error 
(sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Read completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 Write completed with error (sct=0, sc=8) 00:27:12.596 starting I/O failed 00:27:12.596 [2024-11-20 09:59:35.913417] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:12.596 [2024-11-20 09:59:35.913690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.913715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.913877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.913888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.914034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.914046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.914145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.914154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.914351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.914362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 
00:27:12.596 [2024-11-20 09:59:35.914448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.914461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.914753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.914764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.914861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.914870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.915048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.915059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.915228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.915239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 
00:27:12.596 [2024-11-20 09:59:35.915464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.915495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.915618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.915650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.915791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.915821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.916091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.916122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 00:27:12.596 [2024-11-20 09:59:35.916360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.596 [2024-11-20 09:59:35.916391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:12.596 qpair failed and we were unable to recover it. 
00:27:12.596 [2024-11-20 09:59:35.916600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.596 [2024-11-20 09:59:35.916630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.596 qpair failed and we were unable to recover it.
00:27:12.596 [2024-11-20 09:59:35.916891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.596 [2024-11-20 09:59:35.916922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.596 qpair failed and we were unable to recover it.
00:27:12.596 [2024-11-20 09:59:35.917121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.596 [2024-11-20 09:59:35.917152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.917407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.917439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.917764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.917789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.917969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.917995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.918102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.918126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.918333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.918365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.918560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.918591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.918848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.918879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.919032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.919063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.919236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.919267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.919459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.919482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.919669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.919693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.919876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.919908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.920100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.920132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.920326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.920357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.920678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.920729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.920991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.921021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.921220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.921247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.921439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.921466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.921663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.921695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.921964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.921999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.922238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.922283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.922452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.922477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.922714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.922740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.922998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.923031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.923172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.923202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.923457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.923488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.923682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.923708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.597 [2024-11-20 09:59:35.923895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.597 [2024-11-20 09:59:35.923933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.597 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.924119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.924151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.924394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.924424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.924679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.924710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.924838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.924868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.925049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.925080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.925318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.925347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.925592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.925616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.925863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.925886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.926167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.926192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.926316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.926340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.926462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.926485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.876 qpair failed and we were unable to recover it.
00:27:12.876 [2024-11-20 09:59:35.926602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.876 [2024-11-20 09:59:35.926625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.926873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.926897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.927210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.927241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.927505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.927534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.927815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.927846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.928109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.928141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.928407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.928436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.928682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.928713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.928963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.928994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.929208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.929238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.929360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.929390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.929579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.929608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.929871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.929902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.930186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.930219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.930495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.930528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.930783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.930814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.930999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.931032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.931267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.931299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.931536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.931569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.931748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.931780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.932084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.932118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.932254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.932286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.932466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.932497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.932735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.932767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.932990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.933025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.933195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.933227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.933417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.933447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.933732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.933763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.933974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.934013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.934248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.934280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.934406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.934436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.934681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.877 [2024-11-20 09:59:35.934713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.877 qpair failed and we were unable to recover it.
00:27:12.877 [2024-11-20 09:59:35.935025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.935058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.935297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.935328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.935460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.935492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.935666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.935697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.935880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.935911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.936116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.936150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.936387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.936419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.936696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.936728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.936904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.936936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.937131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.937163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.937426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.937459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.937687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.937717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.937979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.938014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.938212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.938244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.938495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.938528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.938786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.938818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.939099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.939132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.939314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.939346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.939607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.939638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.939923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.939962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.940192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.878 [2024-11-20 09:59:35.940225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.878 qpair failed and we were unable to recover it.
00:27:12.878 [2024-11-20 09:59:35.940476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.940508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.940798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.940830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.941099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.941133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.941322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.941353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.941551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.941583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 
00:27:12.878 [2024-11-20 09:59:35.941794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.941825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.942010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.942044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.942309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.942340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.942618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.942650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.942933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.942979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 
00:27:12.878 [2024-11-20 09:59:35.943219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.943251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.943506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.943538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.943722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.943754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.943968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.944002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.944261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.944293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 
00:27:12.878 [2024-11-20 09:59:35.944478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.944516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.878 [2024-11-20 09:59:35.944688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.878 [2024-11-20 09:59:35.944720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.878 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.944886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.944917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.945214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.945248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.945513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.945544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 
00:27:12.879 [2024-11-20 09:59:35.945732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.945763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.945975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.946007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.946216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.946248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.946486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.946517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.946785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.946816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 
00:27:12.879 [2024-11-20 09:59:35.946992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.947024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.947289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.947320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.947532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.947564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.947757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.947788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.948033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.948067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 
00:27:12.879 [2024-11-20 09:59:35.948253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.948285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.948494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.948525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.948721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.948753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.948990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.949022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.949288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.949319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 
00:27:12.879 [2024-11-20 09:59:35.949608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.949640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.949883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.949915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.950224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.950258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.950524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.950555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.950800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.950832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 
00:27:12.879 [2024-11-20 09:59:35.950999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.951033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.951294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.951326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.951508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.951540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.951783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.951814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.952130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.952162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 
00:27:12.879 [2024-11-20 09:59:35.952361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.952393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.952656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.952687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.952972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.953004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.953279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.953311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 00:27:12.879 [2024-11-20 09:59:35.953598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.879 [2024-11-20 09:59:35.953630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.879 qpair failed and we were unable to recover it. 
00:27:12.880 [2024-11-20 09:59:35.953832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.953863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.954051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.954084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.954356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.954389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.954672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.954703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.954983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.955016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 
00:27:12.880 [2024-11-20 09:59:35.955206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.955243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.955496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.955527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.955701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.955732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.955974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.956006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.956189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.956221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 
00:27:12.880 [2024-11-20 09:59:35.956340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.956370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.956542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.956572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.956761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.956792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.957073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.957106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.957372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.957404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 
00:27:12.880 [2024-11-20 09:59:35.957587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.957618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.957884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.957916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.958164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.958197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.958384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.958415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.958694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.958727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 
00:27:12.880 [2024-11-20 09:59:35.958995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.959031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.959316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.959348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.959630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.959661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.959937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.959978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.960251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.960283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 
00:27:12.880 [2024-11-20 09:59:35.960584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.960615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.960878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.960910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.961114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.961148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.961394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.961426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.961608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.961639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 
00:27:12.880 [2024-11-20 09:59:35.961924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.961964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.962147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.962178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.962370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.962403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.962640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.962671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 00:27:12.880 [2024-11-20 09:59:35.962935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.880 [2024-11-20 09:59:35.962978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.880 qpair failed and we were unable to recover it. 
00:27:12.880 [2024-11-20 09:59:35.963204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.880 [2024-11-20 09:59:35.963235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.880 qpair failed and we were unable to recover it.
00:27:12.884 [... identical posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats from 09:59:35.963423 through 09:59:35.993378, all with errno = 111 against tqpair=0x7f7ba0000b90, addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." ...]
00:27:12.884 [2024-11-20 09:59:35.993624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.993655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.993860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.993892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.994185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.994223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.994488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.994521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.994792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.994823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 
00:27:12.884 [2024-11-20 09:59:35.995054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.995089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.995274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.995306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.995492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.995524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.995765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.995797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.996049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.996082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 
00:27:12.884 [2024-11-20 09:59:35.996302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.996333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.996527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.996559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.996816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.996850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.997128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.997162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.997455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.997488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 
00:27:12.884 [2024-11-20 09:59:35.997754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.997787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.998062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.998096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.998406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.998438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.998731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.998763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.999038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.999072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 
00:27:12.884 [2024-11-20 09:59:35.999283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.999314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.999523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.999555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:35.999824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:35.999856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:36.000099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:36.000132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:36.000401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:36.000432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 
00:27:12.884 [2024-11-20 09:59:36.000725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:36.000757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:36.001047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:36.001081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:36.001383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:36.001415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:36.001674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:36.001706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:36.001887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:36.001919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 
00:27:12.884 [2024-11-20 09:59:36.002172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:36.002204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.884 [2024-11-20 09:59:36.002399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.884 [2024-11-20 09:59:36.002431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.884 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.002693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.002726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.002998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.003033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.003234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.003266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 
00:27:12.885 [2024-11-20 09:59:36.003530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.003562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.003855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.003888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.004164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.004198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.004465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.004497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.004792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.004825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 
00:27:12.885 [2024-11-20 09:59:36.005098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.005132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.005440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.005471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.005725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.005763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.006068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.006101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.006320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.006351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 
00:27:12.885 [2024-11-20 09:59:36.006595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.006627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.006802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.006834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.007079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.007112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.007381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.007413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.007602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.007635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 
00:27:12.885 [2024-11-20 09:59:36.007837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.007868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.008091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.008124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.008322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.008356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.008654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.008685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.008962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.008995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 
00:27:12.885 [2024-11-20 09:59:36.009242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.009275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.009524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.009556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.009741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.009773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.009970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.010004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.010271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.010304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 
00:27:12.885 [2024-11-20 09:59:36.010423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.010455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.010724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.010756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.010998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.011032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.011308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.011343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.011598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.011630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 
00:27:12.885 [2024-11-20 09:59:36.011937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.011979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.012232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.012266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.012472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.012504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.012774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.012806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 00:27:12.885 [2024-11-20 09:59:36.013069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.885 [2024-11-20 09:59:36.013104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.885 qpair failed and we were unable to recover it. 
00:27:12.886 [2024-11-20 09:59:36.013354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.886 [2024-11-20 09:59:36.013386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.886 qpair failed and we were unable to recover it. 00:27:12.886 [2024-11-20 09:59:36.013713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.886 [2024-11-20 09:59:36.013745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.886 qpair failed and we were unable to recover it. 00:27:12.886 [2024-11-20 09:59:36.014015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.886 [2024-11-20 09:59:36.014047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.886 qpair failed and we were unable to recover it. 00:27:12.886 [2024-11-20 09:59:36.014246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.886 [2024-11-20 09:59:36.014278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.886 qpair failed and we were unable to recover it. 00:27:12.886 [2024-11-20 09:59:36.014533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.886 [2024-11-20 09:59:36.014565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.886 qpair failed and we were unable to recover it. 
00:27:12.886 [2024-11-20 09:59:36.014756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.886 [2024-11-20 09:59:36.014789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.886 qpair failed and we were unable to recover it. 
00:27:12.889 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" messages repeated for every subsequent connection attempt from 09:59:36.014992 through 09:59:36.046592, tqpair=0x7f7ba0000b90, addr=10.0.0.2, port=4420 ...] 
00:27:12.889 [2024-11-20 09:59:36.046793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.046826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.047082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.047115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.047407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.047440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.047692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.047724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.047936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.047977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 
00:27:12.889 [2024-11-20 09:59:36.048169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.048202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.048402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.048435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.048718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.048749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.048955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.048989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.049240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.049274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 
00:27:12.889 [2024-11-20 09:59:36.049523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.049555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.049855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.049888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.050156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.050195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.050452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.050485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.050712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.050745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 
00:27:12.889 [2024-11-20 09:59:36.050939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.050982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.051117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.051150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.051408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.051440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.051629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.051661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.051862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.051895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 
00:27:12.889 [2024-11-20 09:59:36.052115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.052149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.052362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.052396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.052595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.052628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.052829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.052860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 00:27:12.889 [2024-11-20 09:59:36.053112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.053148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.889 qpair failed and we were unable to recover it. 
00:27:12.889 [2024-11-20 09:59:36.053450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.889 [2024-11-20 09:59:36.053482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.053751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.053784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.053976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.054010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.054282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.054315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.054507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.054539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 
00:27:12.890 [2024-11-20 09:59:36.054809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.054842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.055062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.055096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.055347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.055380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.055649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.055681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.055882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.055915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 
00:27:12.890 [2024-11-20 09:59:36.056201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.056234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.056484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.056516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.056787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.056820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.057030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.057064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.057321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.057355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 
00:27:12.890 [2024-11-20 09:59:36.057652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.057683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.057962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.057996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.058193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.058226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.058495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.058528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.058794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.058826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 
00:27:12.890 [2024-11-20 09:59:36.059122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.059156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.059369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.059402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.059675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.059707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.059983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.060018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.060304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.060337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 
00:27:12.890 [2024-11-20 09:59:36.060596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.060629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.060880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.060911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.061179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.061219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.061436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.061469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.061663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.061695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 
00:27:12.890 [2024-11-20 09:59:36.061901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.061935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.062248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.062281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.062552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.062584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.890 qpair failed and we were unable to recover it. 00:27:12.890 [2024-11-20 09:59:36.062834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.890 [2024-11-20 09:59:36.062867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.063049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.063084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 
00:27:12.891 [2024-11-20 09:59:36.063293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.063326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.063508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.063541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.063747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.063780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.064054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.064089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.064371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.064404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 
00:27:12.891 [2024-11-20 09:59:36.064702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.064735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.064891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.064925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.065142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.065175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.065445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.065477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.065680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.065713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 
00:27:12.891 [2024-11-20 09:59:36.065935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.065976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.066169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.066203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.066501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.066535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.066723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.066755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 00:27:12.891 [2024-11-20 09:59:36.067061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.891 [2024-11-20 09:59:36.067095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.891 qpair failed and we were unable to recover it. 
00:27:12.891 [2024-11-20 09:59:36.067356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.891 [2024-11-20 09:59:36.067389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.891 qpair failed and we were unable to recover it.
[... the same three-line error block (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats, with advancing timestamps, roughly 114 more times between 09:59:36.067 and 09:59:36.098 ...]
00:27:12.894 [2024-11-20 09:59:36.098570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.098603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.098859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.098892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.099203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.099238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.099440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.099471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.099718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.099749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 09:59:36.099968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.100003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.100264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.100296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.100507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.100539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.100718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.100750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.101034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.101068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 09:59:36.101283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.101315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.101562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.101594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.101809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.101840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.102111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.102146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.102434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.102466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.894 [2024-11-20 09:59:36.102741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.102773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.103068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.103102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.103309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.103341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.103599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.103631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 00:27:12.894 [2024-11-20 09:59:36.103839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.894 [2024-11-20 09:59:36.103870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.894 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 09:59:36.104061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.104095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.104397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.104430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.104721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.104759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.105026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.105061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.105311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.105343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 09:59:36.105589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.105622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.105914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.105945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.106266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.106299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.106571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.106603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.106787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.106820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 09:59:36.107013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.107046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.107320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.107351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.107598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.107630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.107904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.107936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.108170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.108201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 09:59:36.108392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.108424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.108684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.108715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.108989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.109023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.109221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.109254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.109527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.109558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 09:59:36.109782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.109814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.110090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.110124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.110407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.110439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.110638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.110669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.110942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.110987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 09:59:36.111259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.111291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.111572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.111604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.111892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.111924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.112205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.112237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.112446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.112478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 
00:27:12.895 [2024-11-20 09:59:36.112737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.112769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.113093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.895 [2024-11-20 09:59:36.113126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.895 qpair failed and we were unable to recover it. 00:27:12.895 [2024-11-20 09:59:36.113308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.113340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.113620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.113653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.113836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.113868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 
00:27:12.896 [2024-11-20 09:59:36.114089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.114123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.114389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.114421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.114634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.114665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.114939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.114981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.115206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.115238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 
00:27:12.896 [2024-11-20 09:59:36.115428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.115459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.115593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.115625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.115819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.115850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.116084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.116116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.116413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.116445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 
00:27:12.896 [2024-11-20 09:59:36.116627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.116660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.116914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.116956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.117107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.117138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.117390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.117422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.117625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.117656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 
00:27:12.896 [2024-11-20 09:59:36.117855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.117886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.118096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.118128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.118330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.118361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.118561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.118593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.118847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.118879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 
00:27:12.896 [2024-11-20 09:59:36.119111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.119145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.119436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.119468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.119662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.119694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.119944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.119986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 00:27:12.896 [2024-11-20 09:59:36.120282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.896 [2024-11-20 09:59:36.120315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.896 qpair failed and we were unable to recover it. 
00:27:12.899 [2024-11-20 09:59:36.151403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.151435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.899 qpair failed and we were unable to recover it. 00:27:12.899 [2024-11-20 09:59:36.151731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.151763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.899 qpair failed and we were unable to recover it. 00:27:12.899 [2024-11-20 09:59:36.152039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.152073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.899 qpair failed and we were unable to recover it. 00:27:12.899 [2024-11-20 09:59:36.152351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.152384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.899 qpair failed and we were unable to recover it. 00:27:12.899 [2024-11-20 09:59:36.152588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.152619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.899 qpair failed and we were unable to recover it. 
00:27:12.899 [2024-11-20 09:59:36.152916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.152956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.899 qpair failed and we were unable to recover it. 00:27:12.899 [2024-11-20 09:59:36.153234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.153267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.899 qpair failed and we were unable to recover it. 00:27:12.899 [2024-11-20 09:59:36.153542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.153574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.899 qpair failed and we were unable to recover it. 00:27:12.899 [2024-11-20 09:59:36.153863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.153895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.899 qpair failed and we were unable to recover it. 00:27:12.899 [2024-11-20 09:59:36.154177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.899 [2024-11-20 09:59:36.154210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 
00:27:12.900 [2024-11-20 09:59:36.154487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.154519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.154803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.154836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.155118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.155152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.155362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.155394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.155644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.155676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 
00:27:12.900 [2024-11-20 09:59:36.155934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.155976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.156211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.156244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.156522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.156555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.156732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.156763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.157039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.157078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 
00:27:12.900 [2024-11-20 09:59:36.157301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.157333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.157579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.157610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.157822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.157855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.158157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.158191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.158415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.158448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 
00:27:12.900 [2024-11-20 09:59:36.158708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.158740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.159001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.159035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.159187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.159220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.159432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.159464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.159736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.159768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 
00:27:12.900 [2024-11-20 09:59:36.159998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.160030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.160304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.160337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.160627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.160661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.160934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.160977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.161176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.161208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 
00:27:12.900 [2024-11-20 09:59:36.161427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.161459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.161756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.161789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.162001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.162036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.162331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.162363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.162633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.162666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 
00:27:12.900 [2024-11-20 09:59:36.162941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.162986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.163264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.163296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.163534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.163567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.163870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.163905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.164188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.164223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 
00:27:12.900 [2024-11-20 09:59:36.164475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.164507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.900 qpair failed and we were unable to recover it. 00:27:12.900 [2024-11-20 09:59:36.164817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.900 [2024-11-20 09:59:36.164850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.165129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.165164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.165449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.165484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.165785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.165819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 
00:27:12.901 [2024-11-20 09:59:36.166083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.166118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.166340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.166373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.166652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.166686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.166882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.166914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.167067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.167100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 
00:27:12.901 [2024-11-20 09:59:36.167306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.167338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.167625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.167658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.167850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.167882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.168149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.168184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.168447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.168487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 
00:27:12.901 [2024-11-20 09:59:36.168739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.168772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.168995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.169030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.169283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.169318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.169506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.169539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.169741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.169774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 
00:27:12.901 [2024-11-20 09:59:36.170052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.170086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.170391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.170425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.170623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.170655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.170846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.170879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.171165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.171199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 
00:27:12.901 [2024-11-20 09:59:36.171478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.171510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.171691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.171726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.172001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.172034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.172320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.172353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 00:27:12.901 [2024-11-20 09:59:36.172565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:12.901 [2024-11-20 09:59:36.172600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:12.901 qpair failed and we were unable to recover it. 
00:27:12.901 [2024-11-20 09:59:36.172832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:12.901 [2024-11-20 09:59:36.172866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:12.901 qpair failed and we were unable to recover it.
[... last 3 messages repeated for every reconnect attempt from 09:59:36.172832 through 09:59:36.202606 (same tqpair=0x7f7ba0000b90, addr=10.0.0.2, port=4420, errno = 111; only timestamps differ) ...]
00:27:13.191 [2024-11-20 09:59:36.202800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.202831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.203052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.203087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.203271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.203303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.203512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.203544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.203767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.203800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 
00:27:13.191 [2024-11-20 09:59:36.204105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.204140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.204322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.204355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.204570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.204602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.204783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.204814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.205007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.205039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 
00:27:13.191 [2024-11-20 09:59:36.205245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.205278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.205553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.205584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.205873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.205907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.206120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.206153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.206283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.206316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 
00:27:13.191 [2024-11-20 09:59:36.206567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.206602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.206900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.206935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.207129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.207161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.207343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.207376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.207597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.207630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 
00:27:13.191 [2024-11-20 09:59:36.207902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.207935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.208175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.208210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.208414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.208448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.208696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.208728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.209004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.209039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 
00:27:13.191 [2024-11-20 09:59:36.209266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.209299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.209571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.209605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.209825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.209857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.210038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.210072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.210261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.210299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 
00:27:13.191 [2024-11-20 09:59:36.210576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.210611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.191 [2024-11-20 09:59:36.210855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.191 [2024-11-20 09:59:36.210886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.191 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.211156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.211190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.211414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.211446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.211717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.211749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 
00:27:13.192 [2024-11-20 09:59:36.212021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.212056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.212338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.212369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.212628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.212661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.212773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.212805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.213015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.213048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 
00:27:13.192 [2024-11-20 09:59:36.213259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.213293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.213520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.213553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.213775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.213806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.213996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.214031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.214163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.214193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 
00:27:13.192 [2024-11-20 09:59:36.214403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.214436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.214719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.214753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.215050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.215084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.215316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.215349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.215610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.215642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 
00:27:13.192 [2024-11-20 09:59:36.215828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.215861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.216114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.216148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.216451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.216483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.216753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.216785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.217065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.217100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 
00:27:13.192 [2024-11-20 09:59:36.217383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.217415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.217720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.217753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.218024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.218058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.218205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.218237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.218474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.218507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 
00:27:13.192 [2024-11-20 09:59:36.218759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.218792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.219067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.219101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.219296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.219329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.219606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.219638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.219851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.219883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 
00:27:13.192 [2024-11-20 09:59:36.220082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.220117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.220366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.220398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.220591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.220624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.220807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.192 [2024-11-20 09:59:36.220841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.192 qpair failed and we were unable to recover it. 00:27:13.192 [2024-11-20 09:59:36.221090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.193 [2024-11-20 09:59:36.221130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.193 qpair failed and we were unable to recover it. 
00:27:13.193 [2024-11-20 09:59:36.221348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.193 [2024-11-20 09:59:36.221381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.193 qpair failed and we were unable to recover it. 00:27:13.193 [2024-11-20 09:59:36.221639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.193 [2024-11-20 09:59:36.221672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.193 qpair failed and we were unable to recover it. 00:27:13.193 [2024-11-20 09:59:36.221855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.193 [2024-11-20 09:59:36.221887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.193 qpair failed and we were unable to recover it. 00:27:13.193 [2024-11-20 09:59:36.222146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.193 [2024-11-20 09:59:36.222181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.193 qpair failed and we were unable to recover it. 00:27:13.193 [2024-11-20 09:59:36.222447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.193 [2024-11-20 09:59:36.222479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.193 qpair failed and we were unable to recover it. 
00:27:13.193 [2024-11-20 09:59:36.222700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.193 [2024-11-20 09:59:36.222733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.193 qpair failed and we were unable to recover it.
00:27:13.196 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 09:59:36.222 through 09:59:36.253; every connection attempt is refused and no qpair recovers ...]
00:27:13.196 [2024-11-20 09:59:36.254057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.254090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.254386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.254419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.254687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.254719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.254973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.255007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.255307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.255341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 
00:27:13.196 [2024-11-20 09:59:36.255604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.255635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.255862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.255895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.256087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.256123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.256305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.256338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.256611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.256642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 
00:27:13.196 [2024-11-20 09:59:36.256761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.256793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.256937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.256982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.257166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.257197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.257546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.257579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.257830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.257863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 
00:27:13.196 [2024-11-20 09:59:36.258180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.258216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.258493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.258525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.258707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.258738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.258942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.258989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.259191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.259224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 
00:27:13.196 [2024-11-20 09:59:36.259474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.259505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.259694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.259727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.196 [2024-11-20 09:59:36.259921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.196 [2024-11-20 09:59:36.259963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.196 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.260241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.260277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.260574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.260609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 
00:27:13.197 [2024-11-20 09:59:36.260875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.260907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.261131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.261171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.261357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.261389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.261526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.261557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.261689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.261722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 
00:27:13.197 [2024-11-20 09:59:36.261979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.262014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.262298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.262331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.262621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.262654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.262933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.262977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.263113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.263144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 
00:27:13.197 [2024-11-20 09:59:36.263337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.263369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.263638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.263671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.263854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.263885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.264162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.264196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.264318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.264350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 
00:27:13.197 [2024-11-20 09:59:36.264572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.264606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.264878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.264911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.265107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.265139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.265402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.265436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.265615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.265649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 
00:27:13.197 [2024-11-20 09:59:36.265927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.265983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.266189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.266224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.266509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.266542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.266667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.266698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.266966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.267002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 
00:27:13.197 [2024-11-20 09:59:36.267254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.267287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.267502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.267534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.267721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.267755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.268001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.268037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.268318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.268350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 
00:27:13.197 [2024-11-20 09:59:36.268541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.268574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.268770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.197 [2024-11-20 09:59:36.268802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.197 qpair failed and we were unable to recover it. 00:27:13.197 [2024-11-20 09:59:36.268989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.269021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.269213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.269245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.269427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.269458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 
00:27:13.198 [2024-11-20 09:59:36.269658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.269689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.269992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.270027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.270243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.270275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.270547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.270582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.270863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.270896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 
00:27:13.198 [2024-11-20 09:59:36.271178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.271211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.271337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.271376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.271581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.271614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.271828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.271861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.272011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.272045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 
00:27:13.198 [2024-11-20 09:59:36.272300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.272331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.272584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.272617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.272800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.272832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.273112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.273144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 00:27:13.198 [2024-11-20 09:59:36.273347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.198 [2024-11-20 09:59:36.273381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.198 qpair failed and we were unable to recover it. 
00:27:13.198 [2024-11-20 09:59:36.273573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.198 [2024-11-20 09:59:36.273606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.198 qpair failed and we were unable to recover it.
[... the same three-record error sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 09:59:36.273 through 09:59:36.303 ...]
00:27:13.201 [2024-11-20 09:59:36.304012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.201 [2024-11-20 09:59:36.304046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.201 qpair failed and we were unable to recover it. 00:27:13.201 [2024-11-20 09:59:36.304343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.201 [2024-11-20 09:59:36.304376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.201 qpair failed and we were unable to recover it. 00:27:13.201 [2024-11-20 09:59:36.304650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.201 [2024-11-20 09:59:36.304684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.201 qpair failed and we were unable to recover it. 00:27:13.201 [2024-11-20 09:59:36.304939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.201 [2024-11-20 09:59:36.304982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.305122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.305156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 
00:27:13.202 [2024-11-20 09:59:36.305460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.305495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.305752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.305784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.305993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.306026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.306211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.306245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.306425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.306458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 
00:27:13.202 [2024-11-20 09:59:36.306691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.306725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.306853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.306884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.307100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.307134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.307383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.307417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.307659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.307693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 
00:27:13.202 [2024-11-20 09:59:36.307972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.308006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.308229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.308264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.308552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.308586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.308729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.308764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.309041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.309077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 
00:27:13.202 [2024-11-20 09:59:36.309212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.309246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.309447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.309481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.309705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.309738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.309875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.309907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.310137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.310169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 
00:27:13.202 [2024-11-20 09:59:36.310386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.310419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.310613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.310646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.310869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.310901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.311057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.311091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.311238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.311271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 
00:27:13.202 [2024-11-20 09:59:36.311468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.311500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.311724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.311757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.311872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.311904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.312062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.312096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.312236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.312269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 
00:27:13.202 [2024-11-20 09:59:36.312547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.312579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.312762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.312799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.312936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.312981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.313182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.313216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.313483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.313515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 
00:27:13.202 [2024-11-20 09:59:36.313659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.202 [2024-11-20 09:59:36.313691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.202 qpair failed and we were unable to recover it. 00:27:13.202 [2024-11-20 09:59:36.313883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.313916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.314141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.314175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.314481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.314515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.314702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.314735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 
00:27:13.203 [2024-11-20 09:59:36.314866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.314898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.315102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.315134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.315331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.315362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.315544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.315577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.315696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.315730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 
00:27:13.203 [2024-11-20 09:59:36.315969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.316005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.316195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.316227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.316451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.316482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.316688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.316721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.316867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.316900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 
00:27:13.203 [2024-11-20 09:59:36.317102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.317134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.317352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.317384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.317515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.317548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.317815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.317848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.317991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.318026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 
00:27:13.203 [2024-11-20 09:59:36.318300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.318333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.318529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.318562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.318813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.318844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.318984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.319018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.319298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.319330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 
00:27:13.203 [2024-11-20 09:59:36.319531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.319565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.319773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.319805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.319939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.319981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.320110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.320143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.320396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.320429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 
00:27:13.203 [2024-11-20 09:59:36.320660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.320692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.320888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.320919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.321217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.321249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.203 [2024-11-20 09:59:36.321523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.203 [2024-11-20 09:59:36.321555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.203 qpair failed and we were unable to recover it. 00:27:13.204 [2024-11-20 09:59:36.321844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.204 [2024-11-20 09:59:36.321876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.204 qpair failed and we were unable to recover it. 
00:27:13.204 [2024-11-20 09:59:36.322122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.322156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.322287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.322326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.322474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.322506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.322777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.322809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.323087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.323120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.323322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.323355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.323500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.323532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.323746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.323778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.324052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.324086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.324226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.324258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.324530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.324564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.324755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.324786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.325055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.325090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.325292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.325326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.325448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.325480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.325724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.325756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.326059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.326094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.326353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.326385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.326640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.326673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.326965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.326999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.327297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.327329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.327588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.327621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.327972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.328007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.328325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.328356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.328640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.328674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.328965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.329000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.329216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.329248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.329364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.329397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.329601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.329634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.329819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.329851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.329979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.330013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.330199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.204 [2024-11-20 09:59:36.330232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.204 qpair failed and we were unable to recover it.
00:27:13.204 [2024-11-20 09:59:36.330496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.330529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.330780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.330811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.331061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.331096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.331394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.331427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.331694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.331726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.331980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.332014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.332210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.332243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.332464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.332496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.332770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.332803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.333088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.333130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.333313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.333347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.333618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.333651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.333868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.333899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.334177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.334211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.334394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.334426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.334680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.334715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.334998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.335034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.335313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.335347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.335531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.335563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.335756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.335788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.336064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.336099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.336378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.336410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.336694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.336727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.336876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.336910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.337215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.337251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.337504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.337535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.337739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.337773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.338066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.338099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.338374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.338408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.338696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.338731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.338934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.338980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.339238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.339272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.339553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.339585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.339850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.339882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.340184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.340221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.340407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.340441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.340805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.340885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.205 [2024-11-20 09:59:36.341199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.205 [2024-11-20 09:59:36.341238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.205 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.341518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.341552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.341770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.341805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.342087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.342122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.342272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.342306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.342526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.342558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.342775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.342808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.343106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.343141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.343411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.343446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.343658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.343691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.343934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.343979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.344198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.344232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.344453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.344486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.344699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.344731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.344910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.344943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.345214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.345246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.345525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.345559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.345810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.345842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.346106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.346142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.346440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.346473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.346614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.346646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.346901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.346937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.347172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.347206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.347331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.347362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.347562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.347593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.347784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.347816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.348008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.348049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.348232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.348264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.348517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.348552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.348750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.348781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.348978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.349011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.349211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.349244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.349492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.349525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.349776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.349808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.350003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.350036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.350270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.350302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.350564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.350598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.350886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.350918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.351201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.206 [2024-11-20 09:59:36.351234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.206 qpair failed and we were unable to recover it.
00:27:13.206 [2024-11-20 09:59:36.351490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.207 [2024-11-20 09:59:36.351521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.207 qpair failed and we were unable to recover it.
00:27:13.207 [2024-11-20 09:59:36.351805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.207 [2024-11-20 09:59:36.351838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.207 qpair failed and we were unable to recover it.
00:27:13.207 [2024-11-20 09:59:36.352087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.207 [2024-11-20 09:59:36.352120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.207 qpair failed and we were unable to recover it.
00:27:13.207 [2024-11-20 09:59:36.352418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.207 [2024-11-20 09:59:36.352452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.207 qpair failed and we were unable to recover it.
00:27:13.207 [2024-11-20 09:59:36.352656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.207 [2024-11-20 09:59:36.352688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.207 qpair failed and we were unable to recover it.
00:27:13.207 [2024-11-20 09:59:36.352889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.207 [2024-11-20 09:59:36.352922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.207 qpair failed and we were unable to recover it.
00:27:13.207 [2024-11-20 09:59:36.353132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.207 [2024-11-20 09:59:36.353165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.207 qpair failed and we were unable to recover it.
00:27:13.207 [2024-11-20 09:59:36.353348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.353382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.353516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.353549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.353774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.353805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.354088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.354124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.354334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.354367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 
00:27:13.207 [2024-11-20 09:59:36.354634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.354666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.354931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.354973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.355268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.355306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.355500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.355532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.355781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.355813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 
00:27:13.207 [2024-11-20 09:59:36.356006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.356040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.356260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.356293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.356594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.356627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.356898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.356931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.357194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.357227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 
00:27:13.207 [2024-11-20 09:59:36.357417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.357450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.357644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.357677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.357926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.357968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.358167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.358200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.358382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.358414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 
00:27:13.207 [2024-11-20 09:59:36.358711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.358745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.358916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.358974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.359256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.359291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.359552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.359585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.359879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.359913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 
00:27:13.207 [2024-11-20 09:59:36.360187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.360222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.360348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.360383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.360638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.360670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.360933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.360981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.361256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.361289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 
00:27:13.207 [2024-11-20 09:59:36.361480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.361511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.207 qpair failed and we were unable to recover it. 00:27:13.207 [2024-11-20 09:59:36.361773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.207 [2024-11-20 09:59:36.361807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.362008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.362043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.362315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.362347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.362554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.362587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 
00:27:13.208 [2024-11-20 09:59:36.362845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.362878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.363182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.363217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.363507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.363541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.363736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.363768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.364053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.364088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 
00:27:13.208 [2024-11-20 09:59:36.364305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.364338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.364619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.364652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.364853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.364884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.365104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.365141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.365326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.365357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 
00:27:13.208 [2024-11-20 09:59:36.365556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.365589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.365862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.365894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.366156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.366191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.366440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.366478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.366780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.366812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 
00:27:13.208 [2024-11-20 09:59:36.367020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.367055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.367217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.367248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.367475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.367507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.367638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.367671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.367886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.367920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 
00:27:13.208 [2024-11-20 09:59:36.368185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.368218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.368471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.368505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.368760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.368791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.368986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.369019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.369223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.369257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 
00:27:13.208 [2024-11-20 09:59:36.369504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.369537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.369814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.369848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.370159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.370194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.370413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.370446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 00:27:13.208 [2024-11-20 09:59:36.370710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.208 [2024-11-20 09:59:36.370743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.208 qpair failed and we were unable to recover it. 
00:27:13.209 [2024-11-20 09:59:36.371013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.371047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.371349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.371383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.371563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.371595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.371787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.371819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.372071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.372104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 
00:27:13.209 [2024-11-20 09:59:36.372301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.372334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.372553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.372584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.372897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.372928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.373213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.373246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.373429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.373459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 
00:27:13.209 [2024-11-20 09:59:36.373706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.373738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.374001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.374034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.374261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.374293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.374593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.374624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 00:27:13.209 [2024-11-20 09:59:36.374900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.209 [2024-11-20 09:59:36.374932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.209 qpair failed and we were unable to recover it. 
00:27:13.209 [... the same three-line record (connect() failed, errno = 111 / sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every retry, timestamps 09:59:36.375231 through 09:59:36.404249 ...]
00:27:13.212 [2024-11-20 09:59:36.404512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.404543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.404796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.404827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.405101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.405134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.405383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.405415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.405620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.405651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 
00:27:13.212 [2024-11-20 09:59:36.405926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.405969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.406226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.406258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.406535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.406566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.406857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.406889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.407103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.407137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 
00:27:13.212 [2024-11-20 09:59:36.407396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.407429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.407725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.407758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.407974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.408008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.408208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.408240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.408511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.408542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 
00:27:13.212 [2024-11-20 09:59:36.408741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.408773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.409034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.409068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.409287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.409319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.409569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.409600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.409858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.409890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 
00:27:13.212 [2024-11-20 09:59:36.410115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.410148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.410407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.410439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.410728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.410761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.410997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.411031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.411180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.411212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 
00:27:13.212 [2024-11-20 09:59:36.411431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.411463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.212 [2024-11-20 09:59:36.411777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.212 [2024-11-20 09:59:36.411808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.212 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.412015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.412048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.412268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.412300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.412521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.412553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 
00:27:13.213 [2024-11-20 09:59:36.412828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.412859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.413075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.413111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.413345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.413377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.413600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.413635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.413885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.413918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 
00:27:13.213 [2024-11-20 09:59:36.414074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.414112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.414405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.414439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.414666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.414698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.414902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.414939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.415246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.415280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 
00:27:13.213 [2024-11-20 09:59:36.415474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.415506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.415802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.415833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.416031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.416064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.416211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.416243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.416495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.416527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 
00:27:13.213 [2024-11-20 09:59:36.416806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.416840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.417069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.417102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.417310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.417343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.417544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.417576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.417855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.417889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 
00:27:13.213 [2024-11-20 09:59:36.418036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.418069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.418371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.418405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.418561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.418592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.418845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.418876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.419151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.419185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 
00:27:13.213 [2024-11-20 09:59:36.419386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.419417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.419693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.419725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.420026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.420062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.420263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.420294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.420546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.420580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 
00:27:13.213 [2024-11-20 09:59:36.420833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.420865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.213 [2024-11-20 09:59:36.421144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.213 [2024-11-20 09:59:36.421178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.213 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.421369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.421411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.421539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.421572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.421839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.421871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 
00:27:13.214 [2024-11-20 09:59:36.422150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.422186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.422398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.422429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.422708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.422741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.423038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.423074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.423297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.423331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 
00:27:13.214 [2024-11-20 09:59:36.423549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.423583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.423872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.423906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.424121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.424154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.424349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.424383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 00:27:13.214 [2024-11-20 09:59:36.424657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.424689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it. 
00:27:13.214 [2024-11-20 09:59:36.424973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.214 [2024-11-20 09:59:36.425009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.214 qpair failed and we were unable to recover it.
[log truncated: the two messages above repeat without variation (~114 further occurrences) between 09:59:36.425 and 09:59:36.452 — every attempt failed with errno = 111 for tqpair=0x8dbba0, addr=10.0.0.2, port=4420, and the qpair could not be recovered]
00:27:13.217 [2024-11-20 09:59:36.453137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.453169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.453446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.453480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.453817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.453852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.453974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.454008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.454212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.454245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 
00:27:13.217 [2024-11-20 09:59:36.454465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.454498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.454702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.454733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.454986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.455019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.455163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.455193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.455484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.455517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 
00:27:13.217 [2024-11-20 09:59:36.455738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.455770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.455982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.456016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.456152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.456192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.456386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.456418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.456704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.456737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 
00:27:13.217 [2024-11-20 09:59:36.456966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.457005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.457140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.457170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.457395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.457430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.457647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.457684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 00:27:13.217 [2024-11-20 09:59:36.457879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.217 [2024-11-20 09:59:36.457912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.217 qpair failed and we were unable to recover it. 
00:27:13.217 [2024-11-20 09:59:36.458198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.458234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.458440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.458476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.458725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.458760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.459059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.459096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.459357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.459391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 
00:27:13.218 [2024-11-20 09:59:36.459610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.459643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.459917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.459962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.460218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.460258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.460469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.460501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.460645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.460677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 
00:27:13.218 [2024-11-20 09:59:36.460925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.460970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.461258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.461294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.461496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.461531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.461785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.461820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.462049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.462085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 
00:27:13.218 [2024-11-20 09:59:36.462280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.462313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.462625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.462657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.462859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.462892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.463090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.463123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.463398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.463431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 
00:27:13.218 [2024-11-20 09:59:36.463719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.463750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.464052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.464087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.464246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.464278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.464404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.464435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.464570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.464601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 
00:27:13.218 [2024-11-20 09:59:36.464794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.464826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.465080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.465114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.465365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.465397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.465608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.465639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.465926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.465998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 
00:27:13.218 [2024-11-20 09:59:36.466153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.466185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.466329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.466361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.466641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.466673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.466868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.466899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.467165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.467198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 
00:27:13.218 [2024-11-20 09:59:36.467465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.467496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.467713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.467744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.467942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.467991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.218 [2024-11-20 09:59:36.468243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.218 [2024-11-20 09:59:36.468274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.218 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.468489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.468521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 
00:27:13.219 [2024-11-20 09:59:36.468767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.468799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.469055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.469089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.469284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.469316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.469526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.469559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.469822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.469855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 
00:27:13.219 [2024-11-20 09:59:36.470059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.470094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.470295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.470326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.470600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.470631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.470890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.470922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.471133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.471165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 
00:27:13.219 [2024-11-20 09:59:36.471365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.471396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.471526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.471557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.471842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.471874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.472067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.472101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 00:27:13.219 [2024-11-20 09:59:36.472255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.219 [2024-11-20 09:59:36.472287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.219 qpair failed and we were unable to recover it. 
00:27:13.219 [2024-11-20 09:59:36.472500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.219 [2024-11-20 09:59:36.472531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.219 qpair failed and we were unable to recover it.
[The same three-record failure sequence — connect() failed with errno = 111, the sock connection error for tqpair=0x8dbba0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt from 09:59:36.472 through 09:59:36.502; the elapsed-time stamps advance from 00:27:13.219 to 00:27:13.508.]
00:27:13.508 [2024-11-20 09:59:36.502122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.508 [2024-11-20 09:59:36.502154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.508 qpair failed and we were unable to recover it. 00:27:13.508 [2024-11-20 09:59:36.502364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.508 [2024-11-20 09:59:36.502397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.508 qpair failed and we were unable to recover it. 00:27:13.508 [2024-11-20 09:59:36.502613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.508 [2024-11-20 09:59:36.502644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.508 qpair failed and we were unable to recover it. 00:27:13.508 [2024-11-20 09:59:36.502846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.508 [2024-11-20 09:59:36.502877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.508 qpair failed and we were unable to recover it. 00:27:13.508 [2024-11-20 09:59:36.503128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.508 [2024-11-20 09:59:36.503162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.508 qpair failed and we were unable to recover it. 
00:27:13.508 [2024-11-20 09:59:36.503355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.508 [2024-11-20 09:59:36.503386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.508 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.503508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.503539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.503747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.503778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.504091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.504124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.504322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.504354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 
00:27:13.509 [2024-11-20 09:59:36.504603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.504635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.504789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.504821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.505097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.505129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.505402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.505434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.505798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.505829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 
00:27:13.509 [2024-11-20 09:59:36.506033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.506065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.506314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.506345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.506622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.506654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.506901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.506932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.507142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.507174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 
00:27:13.509 [2024-11-20 09:59:36.507352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.507384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.507525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.507556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.507842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.507874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.508155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.508189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.508399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.508430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 
00:27:13.509 [2024-11-20 09:59:36.508674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.508705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.508909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.508940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.509206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.509239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.509441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.509472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.509777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.509809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 
00:27:13.509 [2024-11-20 09:59:36.510074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.510107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.510310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.509 [2024-11-20 09:59:36.510341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.509 qpair failed and we were unable to recover it. 00:27:13.509 [2024-11-20 09:59:36.510596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.510627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.510873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.510905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.511027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.511059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 
00:27:13.510 [2024-11-20 09:59:36.511339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.511371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.511531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.511568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.511708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.511740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.511970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.512003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.512225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.512256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 
00:27:13.510 [2024-11-20 09:59:36.512508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.512541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.512791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.512822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.513063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.513096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.513350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.513383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.513530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.513560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 
00:27:13.510 [2024-11-20 09:59:36.513756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.513787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.513913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.513945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.514229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.514261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.514511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.514542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.514684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.514716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 
00:27:13.510 [2024-11-20 09:59:36.514936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.514995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.515154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.515185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.515327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.515358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.515666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.515699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.515971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.516004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 
00:27:13.510 [2024-11-20 09:59:36.516261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.516292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.516483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.516513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.516827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.516859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.510 [2024-11-20 09:59:36.517065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.510 [2024-11-20 09:59:36.517098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.510 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.517324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.517356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 
00:27:13.511 [2024-11-20 09:59:36.517609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.517642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.517760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.517790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.517992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.518024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.518230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.518267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.518478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.518509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 
00:27:13.511 [2024-11-20 09:59:36.518783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.518814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.519055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.519088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.519232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.519264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.519469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.519500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.519814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.519845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 
00:27:13.511 [2024-11-20 09:59:36.520123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.520155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.520364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.520395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.520541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.520572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.520842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.520873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 00:27:13.511 [2024-11-20 09:59:36.521161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.511 [2024-11-20 09:59:36.521195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.511 qpair failed and we were unable to recover it. 
00:27:13.511 [2024-11-20 09:59:36.521443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.511 [2024-11-20 09:59:36.521474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.511 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / qpair recovery failure for tqpair=0x8dbba0 (addr=10.0.0.2, port=4420) repeated continuously through 09:59:36.551613 ...]
00:27:13.515 [2024-11-20 09:59:36.551895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.515 [2024-11-20 09:59:36.551926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.515 qpair failed and we were unable to recover it. 00:27:13.515 [2024-11-20 09:59:36.552216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.515 [2024-11-20 09:59:36.552248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.515 qpair failed and we were unable to recover it. 00:27:13.515 [2024-11-20 09:59:36.552372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.515 [2024-11-20 09:59:36.552403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.515 qpair failed and we were unable to recover it. 00:27:13.515 [2024-11-20 09:59:36.552603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.515 [2024-11-20 09:59:36.552635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.515 qpair failed and we were unable to recover it. 00:27:13.515 [2024-11-20 09:59:36.552894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.515 [2024-11-20 09:59:36.552926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.515 qpair failed and we were unable to recover it. 
00:27:13.516 [2024-11-20 09:59:36.553158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.553192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.553340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.553371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.553708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.553740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.553961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.553994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.554255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.554287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 
00:27:13.516 [2024-11-20 09:59:36.554438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.554470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.554779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.554810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.555098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.555132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.555248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.555280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.555468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.555499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 
00:27:13.516 [2024-11-20 09:59:36.555633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.555664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.555971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.556004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.556163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.556194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.556326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.556356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.556536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.556568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 
00:27:13.516 [2024-11-20 09:59:36.556837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.556867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.557169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.557203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.557416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.557448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.557578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.557609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.557792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.557823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 
00:27:13.516 [2024-11-20 09:59:36.558124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.558156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.558305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.558337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.558472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.558503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.558705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.558736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.558963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.558998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 
00:27:13.516 [2024-11-20 09:59:36.559189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.559219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.559411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.559443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.559730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.559761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.560042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.560076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.560324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.560354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 
00:27:13.516 [2024-11-20 09:59:36.560507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.560539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.560755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.560787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.561013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.561046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.561271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.561303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.561497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.561528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 
00:27:13.516 [2024-11-20 09:59:36.561732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.561763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.561991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.516 [2024-11-20 09:59:36.562025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.516 qpair failed and we were unable to recover it. 00:27:13.516 [2024-11-20 09:59:36.562161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.562191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.562494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.562525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.562810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.562842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 
00:27:13.517 [2024-11-20 09:59:36.563093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.563126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.563327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.563359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.563557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.563589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.563793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.563825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.564088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.564121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 
00:27:13.517 [2024-11-20 09:59:36.564356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.564394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.564637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.564668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.564971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.565005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.565157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.565189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.565379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.565410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 
00:27:13.517 [2024-11-20 09:59:36.565701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.565732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.566017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.566050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.566306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.566337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.566533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.566565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.566757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.566787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 
00:27:13.517 [2024-11-20 09:59:36.567015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.567048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.567235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.567266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.567408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.567440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.567664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.567695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.567880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.567911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 
00:27:13.517 [2024-11-20 09:59:36.568205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.568239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.568495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.568527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.568793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.568826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.569009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.569042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.569258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.569289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 
00:27:13.517 [2024-11-20 09:59:36.569472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.569502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.569712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.569743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.569938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.569980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.570183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.570214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 00:27:13.517 [2024-11-20 09:59:36.570508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.517 [2024-11-20 09:59:36.570539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.517 qpair failed and we were unable to recover it. 
00:27:13.517 [2024-11-20 09:59:36.570789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.517 [2024-11-20 09:59:36.570821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.517 qpair failed and we were unable to recover it.
00:27:13.517 [... the same connect()/nvme_tcp_qpair_connect_sock/qpair-failed triplet repeats for every reconnect attempt, timestamps 09:59:36.571 through 09:59:36.599, all against tqpair=0x8dbba0, addr=10.0.0.2, port=4420 ...]
00:27:13.521 [2024-11-20 09:59:36.599916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.599962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.600171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.600204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.600353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.600384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.600581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.600613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.600889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.600921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 
00:27:13.521 [2024-11-20 09:59:36.601136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.601168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.601289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.601320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.601445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.601476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.601741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.601773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.602067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.602100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 
00:27:13.521 [2024-11-20 09:59:36.602359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.602392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.602657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.602688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.603001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.603033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.603190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.603222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.603472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.603503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 
00:27:13.521 [2024-11-20 09:59:36.603719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.603752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.604040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.604072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.604275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.604306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.604510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.604542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.604731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.604762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 
00:27:13.521 [2024-11-20 09:59:36.605038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.605071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.605322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.605355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.605498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.605530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.605807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.605839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.606152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.606187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 
00:27:13.521 [2024-11-20 09:59:36.606331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.606362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.606519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.606551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.606674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.606706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.606838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.606869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.607078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.607112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 
00:27:13.521 [2024-11-20 09:59:36.607255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.607287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.607413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.607445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.607757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.607788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.608046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.608081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.608386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.608418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 
00:27:13.521 [2024-11-20 09:59:36.608648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.608680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.608912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.608943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.609110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.521 [2024-11-20 09:59:36.609148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.521 qpair failed and we were unable to recover it. 00:27:13.521 [2024-11-20 09:59:36.609330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.609362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.609595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.609626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 
00:27:13.522 [2024-11-20 09:59:36.609820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.609853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.609996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.610030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.610244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.610276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.610574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.610607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.610831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.610862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 
00:27:13.522 [2024-11-20 09:59:36.611062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.611096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.611290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.611320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.611599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.611631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.611892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.611923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.612197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.612230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 
00:27:13.522 [2024-11-20 09:59:36.612478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.612510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.612801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.612833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.613079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.613111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.613386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.613418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.613649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.613681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 
00:27:13.522 [2024-11-20 09:59:36.613879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.613910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.614224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.614263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.614391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.614423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.614569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.614600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.614878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.614909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 
00:27:13.522 [2024-11-20 09:59:36.615196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.615229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.615487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.615517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.615766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.615798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.616067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.616101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.616230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.616268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 
00:27:13.522 [2024-11-20 09:59:36.616480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.616512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.616854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.616885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.617166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.617199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.617335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.617366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.617497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.617528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 
00:27:13.522 [2024-11-20 09:59:36.617798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.617830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.618033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.618066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.618264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.618295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.618502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.618534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 00:27:13.522 [2024-11-20 09:59:36.618731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.618762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.522 qpair failed and we were unable to recover it. 
00:27:13.522 [2024-11-20 09:59:36.618967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.522 [2024-11-20 09:59:36.619000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.619270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.619301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.619512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.619543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.619777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.619809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.620029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.620062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 
00:27:13.523 [2024-11-20 09:59:36.620204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.620236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.620497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.620528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.620675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.620706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.620852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.620883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.621112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.621144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 
00:27:13.523 [2024-11-20 09:59:36.621273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.621305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.621728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.621765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.621986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.622024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.622239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.622271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.622400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.622431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 
00:27:13.523 [2024-11-20 09:59:36.622684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.622716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.622969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.623003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.623153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.623185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.623329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.623361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.623644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.623675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 
00:27:13.523 [2024-11-20 09:59:36.623877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.623908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.624203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.624236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.624383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.624414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.624732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.624764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.624967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.625002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 
00:27:13.523 [2024-11-20 09:59:36.625131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.625163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.625418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.625450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.625586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.625616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.625735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.625766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.625900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.625930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 
00:27:13.523 [2024-11-20 09:59:36.626134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.626174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.626434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.626466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.626613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.626643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.626857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.626889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.523 qpair failed and we were unable to recover it. 00:27:13.523 [2024-11-20 09:59:36.627130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.523 [2024-11-20 09:59:36.627163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 
00:27:13.524 [2024-11-20 09:59:36.627368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.627399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.627680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.627711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.627896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.627927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.628140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.628173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.628377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.628408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 
00:27:13.524 [2024-11-20 09:59:36.628533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.628564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.628768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.628799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.629084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.629118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.629377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.629408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.629686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.629717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 
00:27:13.524 [2024-11-20 09:59:36.629970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.630006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.630207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.630237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.630445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.630477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.630778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.630809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.631008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.631041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 
00:27:13.524 [2024-11-20 09:59:36.631243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.631275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.631471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.631503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.631805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.631838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.632048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.632082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.632359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.632392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 
00:27:13.524 [2024-11-20 09:59:36.632703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.632735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.632971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.633006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.633200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.633233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.633493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.633525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.633741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.633774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 
00:27:13.524 [2024-11-20 09:59:36.634019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.634054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.634238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.634271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.634465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.634497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.634698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.634730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.634961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.634994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 
00:27:13.524 [2024-11-20 09:59:36.635190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.635222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.635422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.635453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.635653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.635684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.635938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.635982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.524 [2024-11-20 09:59:36.636173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.636206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 
00:27:13.524 [2024-11-20 09:59:36.636406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.524 [2024-11-20 09:59:36.636438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.524 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.636808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.636899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.637234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.637273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.637413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.637444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.637652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.637683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 
00:27:13.525 [2024-11-20 09:59:36.637973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.638007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.638194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.638226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.638430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.638462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.638735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.638767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.639042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.639076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 
00:27:13.525 [2024-11-20 09:59:36.639269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.639302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.639428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.639459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.639642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.639674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.640012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.640045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.640253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.640302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 
00:27:13.525 [2024-11-20 09:59:36.640500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.640531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.640671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.640701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.640987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.641019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.641238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.641270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.641553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.641584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 
00:27:13.525 [2024-11-20 09:59:36.641871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.641904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.642217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.642251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.642383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.642414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.642645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.642677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 00:27:13.525 [2024-11-20 09:59:36.642942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.525 [2024-11-20 09:59:36.642983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.525 qpair failed and we were unable to recover it. 
00:27:13.525 [2024-11-20 09:59:36.643135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.525 [2024-11-20 09:59:36.643168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.525 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeated for each retry from 09:59:36.643365 through 09:59:36.671816 ...]
00:27:13.528 [2024-11-20 09:59:36.672103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.528 [2024-11-20 09:59:36.672138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.528 qpair failed and we were unable to recover it. 00:27:13.528 [2024-11-20 09:59:36.672342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.528 [2024-11-20 09:59:36.672377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.528 qpair failed and we were unable to recover it. 00:27:13.528 [2024-11-20 09:59:36.672574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.528 [2024-11-20 09:59:36.672606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.528 qpair failed and we were unable to recover it. 00:27:13.528 [2024-11-20 09:59:36.672878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.528 [2024-11-20 09:59:36.672912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.528 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.673067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.673101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 
00:27:13.529 [2024-11-20 09:59:36.673284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.673319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.673543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.673574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.673853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.673884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.674088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.674123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.674342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.674376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 
00:27:13.529 [2024-11-20 09:59:36.674675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.674707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.674975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.675009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.675275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.675308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.675537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.675571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.675776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.675809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 
00:27:13.529 [2024-11-20 09:59:36.675979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.676013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.676153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.676188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.676449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.676480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.676678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.676711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.676973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.677007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 
00:27:13.529 [2024-11-20 09:59:36.677149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.677182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.677454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.677492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.677625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.677656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.677846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.677879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.678054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.678088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 
00:27:13.529 [2024-11-20 09:59:36.678311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.678342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.678499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.678533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.678831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.678864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.679049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.679082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.679269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.679310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 
00:27:13.529 [2024-11-20 09:59:36.679535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.679568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.679778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.679812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.680001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.680034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.680185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.680217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.680348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.680381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 
00:27:13.529 [2024-11-20 09:59:36.680571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.680603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.680824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.680856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.681081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.681114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.681310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.681345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 00:27:13.529 [2024-11-20 09:59:36.681532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.681562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.529 qpair failed and we were unable to recover it. 
00:27:13.529 [2024-11-20 09:59:36.681750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.529 [2024-11-20 09:59:36.681782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.681918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.681961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.682239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.682271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.682527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.682559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.682680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.682711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 
00:27:13.530 [2024-11-20 09:59:36.682914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.682946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.683186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.683219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.683420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.683453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.683687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.683720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.683916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.683958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 
00:27:13.530 [2024-11-20 09:59:36.684164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.684198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.684475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.684508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.684686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.684716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.684997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.685030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.685190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.685222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 
00:27:13.530 [2024-11-20 09:59:36.685375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.685406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.685595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.685626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.685860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.685892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.686202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.686235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.686432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.686465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 
00:27:13.530 [2024-11-20 09:59:36.686726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.686758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.686962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.687006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.687235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.687268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.687402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.687436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.687643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.687677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 
00:27:13.530 [2024-11-20 09:59:36.687868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.687900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.688023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.688053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.688326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.688361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.688545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.688578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.688778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.688809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 
00:27:13.530 [2024-11-20 09:59:36.689072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.689105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.689243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.689275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.689479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.689513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.689820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.689853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.690151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.690186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 
00:27:13.530 [2024-11-20 09:59:36.690332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.690364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.690512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.690543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.690728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.530 [2024-11-20 09:59:36.690761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.530 qpair failed and we were unable to recover it. 00:27:13.530 [2024-11-20 09:59:36.691013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.691049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.691301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.691333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 
00:27:13.531 [2024-11-20 09:59:36.691538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.691571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.691868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.691900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.692063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.692096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.692366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.692401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.692682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.692716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 
00:27:13.531 [2024-11-20 09:59:36.693005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.693040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.693200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.693233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.693522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.693554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.693755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.693789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.694082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.694117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 
00:27:13.531 [2024-11-20 09:59:36.694317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.694348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.694599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.694633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.694848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.694880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.695093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.695127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.695280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.695314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 
00:27:13.531 [2024-11-20 09:59:36.695524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.695556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.695738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.695771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.695889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.695920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.696193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.696223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.696352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.696385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 
00:27:13.531 [2024-11-20 09:59:36.696517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.696550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.696760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.696798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.696981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.697014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.697240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.697272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.697529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.697560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 
00:27:13.531 [2024-11-20 09:59:36.697795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.697828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.698032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.698064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.698264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.698297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.698501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.698532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.698727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.698759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 
00:27:13.531 [2024-11-20 09:59:36.699035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.699069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.699209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.699239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.699370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.699402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.699604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.699636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.531 [2024-11-20 09:59:36.699917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.699969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 
00:27:13.531 [2024-11-20 09:59:36.700123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.531 [2024-11-20 09:59:36.700156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.531 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.700353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.700385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.700596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.700626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.700823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.700855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.701061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.701095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 
00:27:13.532 [2024-11-20 09:59:36.701233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.701265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.701444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.701477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.701738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.701771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.702025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.702058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.702196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.702228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 
00:27:13.532 [2024-11-20 09:59:36.702412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.702446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.702746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.702779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.703010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.703044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.703173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.703207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.703420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.703453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 
00:27:13.532 [2024-11-20 09:59:36.703633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.703664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.703869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.703901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.704109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.704143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.704360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.704391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.704523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.704555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 
00:27:13.532 [2024-11-20 09:59:36.704772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.704804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.705077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.705111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.705304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.705337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.705538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.705571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.705736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.705767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 
00:27:13.532 [2024-11-20 09:59:36.706008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.706041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.706187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.706227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.706425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.706457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.706680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.706711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.706967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.707002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 
00:27:13.532 [2024-11-20 09:59:36.707220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.707253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.707446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.707477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.707671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.707703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.707928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.707971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.708132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.708163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 
00:27:13.532 [2024-11-20 09:59:36.708353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.708385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.708648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.708681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.708824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.708854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.532 qpair failed and we were unable to recover it. 00:27:13.532 [2024-11-20 09:59:36.709119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.532 [2024-11-20 09:59:36.709153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.709321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.709354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 
00:27:13.533 [2024-11-20 09:59:36.709552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.709586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.709795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.709828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.710109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.710141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.710282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.710314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.710524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.710557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 
00:27:13.533 [2024-11-20 09:59:36.710824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.710857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.711073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.711106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.711251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.711283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.711474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.711506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.711767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.711798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 
00:27:13.533 [2024-11-20 09:59:36.712095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.712128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.712294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.712326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.712459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.712490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.712754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.712786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.712921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.712966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 
00:27:13.533 [2024-11-20 09:59:36.713094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.713127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.713259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.713296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.713445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.713478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.713674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.713707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.713981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.714015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 
00:27:13.533 [2024-11-20 09:59:36.714286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.714319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.714458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.714489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.714673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.714705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.714993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.715027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 00:27:13.533 [2024-11-20 09:59:36.715171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.533 [2024-11-20 09:59:36.715203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.533 qpair failed and we were unable to recover it. 
00:27:13.533 [2024-11-20 09:59:36.715386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.533 [2024-11-20 09:59:36.715418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.533 qpair failed and we were unable to recover it.
00:27:13.533 [2024-11-20 09:59:36.715710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.533 [2024-11-20 09:59:36.715749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.533 qpair failed and we were unable to recover it.
00:27:13.533 [2024-11-20 09:59:36.716022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.533 [2024-11-20 09:59:36.716055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.533 qpair failed and we were unable to recover it.
00:27:13.533 [2024-11-20 09:59:36.716261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.533 [2024-11-20 09:59:36.716294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.533 qpair failed and we were unable to recover it.
00:27:13.533 [2024-11-20 09:59:36.716489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.533 [2024-11-20 09:59:36.716522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.533 qpair failed and we were unable to recover it.
00:27:13.533 [2024-11-20 09:59:36.716655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.533 [2024-11-20 09:59:36.716687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.533 qpair failed and we were unable to recover it.
00:27:13.533 [2024-11-20 09:59:36.716805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.533 [2024-11-20 09:59:36.716837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.533 qpair failed and we were unable to recover it.
00:27:13.533 [2024-11-20 09:59:36.716986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.717022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.717176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.717210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.717419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.717450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.717722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.717754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.717895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.717926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.718118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.718154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.718270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.718301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.718458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.718491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.718764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.718797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.718929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.718975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.719183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.719213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.719455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.719486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.719617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.719649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.719966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.720000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.720193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.720227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.720420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.720451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.720655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.720687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.720889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.720922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.721150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.721184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.721367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.721399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.721605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.721637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.721914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.721959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.722084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.722115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.722302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.722335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.722549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.722581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.722761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.722792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.723011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.723045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.723193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.723225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.723406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.723439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.723664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.723695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.723992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.724028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.724174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.724206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.724334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.724364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.724488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.724520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.724730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.724771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.724922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.724961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.725105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.725137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.534 [2024-11-20 09:59:36.725269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.534 [2024-11-20 09:59:36.725300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.534 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.725442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.725474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.725692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.725724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.725887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.725920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.726055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.726086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.726235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.726268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.726472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.726505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.726766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.726799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.727072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.727106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.727422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.727455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.727602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.727636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.727838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.727876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.728130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.728163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.728322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.728358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.728540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.728573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.728785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.728816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.728972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.729003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.729214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.729247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.729425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.729457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.729766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.729798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.729960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.729992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.730207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.730240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.730398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.730431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.730562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.730592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.730826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.730904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.731116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.731184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.732337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.732383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.732739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.732773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.732922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.732968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.733121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.733153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.733290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.733319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.733465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.733494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.733791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.733822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.733939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.733980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.734132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.734162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.734313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.734343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.734486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.734515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.734715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.535 [2024-11-20 09:59:36.734755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.535 qpair failed and we were unable to recover it.
00:27:13.535 [2024-11-20 09:59:36.734961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.734993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.735183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.735213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.735432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.735463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.735603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.735632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.735849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.735882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.736085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.736117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.736242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.736271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.736404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.736434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.736667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.736701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.736965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.736998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.737136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.737170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.737371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.737405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.737552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.536 [2024-11-20 09:59:36.737581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:13.536 qpair failed and we were unable to recover it.
00:27:13.536 [2024-11-20 09:59:36.737772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.737803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.737994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.738026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.738158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.738187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.738336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.738362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.738493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.738518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 
00:27:13.536 [2024-11-20 09:59:36.738748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.738776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.738975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.739005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.739132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.739158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.739339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.739365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.740852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.740903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 
00:27:13.536 [2024-11-20 09:59:36.741142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.741171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.741322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.741348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.741493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.741521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.741646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.741673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.741873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.741899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 
00:27:13.536 [2024-11-20 09:59:36.742091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.742119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.742298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.742323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.742459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.742484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.742797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.742822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.742999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.743027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 
00:27:13.536 [2024-11-20 09:59:36.743146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.743173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.743294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.743320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.743454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.743480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.743583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.743613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.536 qpair failed and we were unable to recover it. 00:27:13.536 [2024-11-20 09:59:36.743731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.536 [2024-11-20 09:59:36.743757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 
00:27:13.537 [2024-11-20 09:59:36.743930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.743971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.744212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.744242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.744389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.744417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.744550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.744576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.744864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.744890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 
00:27:13.537 [2024-11-20 09:59:36.745014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.745041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.745166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.745193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.745315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.745341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.745530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.745556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.745668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.745692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 
00:27:13.537 [2024-11-20 09:59:36.745966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.745995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.746133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.746160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.746285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.746309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.746445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.746469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.746751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.746776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 
00:27:13.537 [2024-11-20 09:59:36.746967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.746995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.747119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.747146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.747259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.747286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.747403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.747427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.747573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.747598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 
00:27:13.537 [2024-11-20 09:59:36.747889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.747914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.748094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.748122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.748246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.748274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.748413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.748442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.748675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.748703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 
00:27:13.537 [2024-11-20 09:59:36.748893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.748921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.749078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.749106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.749255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.749282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.749535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.749615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.537 [2024-11-20 09:59:36.749839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.749876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 
00:27:13.537 [2024-11-20 09:59:36.750195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.537 [2024-11-20 09:59:36.750232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.537 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.750442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.750476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.750696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.750729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.750920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.750967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.751177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.751211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 
00:27:13.538 [2024-11-20 09:59:36.751359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.751392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.753024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.753085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.753255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.753287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.753444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.753476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.753598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.753629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 
00:27:13.538 [2024-11-20 09:59:36.753794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.753826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.753986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.754021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.754227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.754260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.754394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.754426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.754644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.754676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 
00:27:13.538 [2024-11-20 09:59:36.754959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.754992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.755139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.755170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.755369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.755402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.755723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.755758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.756043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.756076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 
00:27:13.538 [2024-11-20 09:59:36.757650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.757710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.758012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.758049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.758215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.758249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.758388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.758419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.758567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.758600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 
00:27:13.538 [2024-11-20 09:59:36.758725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.758765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.758972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.759006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.759157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.759189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.759345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.759378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 00:27:13.538 [2024-11-20 09:59:36.759528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.759560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 
00:27:13.538 [2024-11-20 09:59:36.759840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.538 [2024-11-20 09:59:36.759873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.538 qpair failed and we were unable to recover it. 
00:27:13.538 [... previous error sequence (connect() failed, errno = 111 / sock connection error / qpair failed) repeated 35 more times for tqpair=0x8dbba0, timestamps 09:59:36.760021 through 09:59:36.768215 ...] 
00:27:13.539 [2024-11-20 09:59:36.768459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.539 [2024-11-20 09:59:36.768537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.539 qpair failed and we were unable to recover it. 
00:27:13.541 [... previous error sequence repeated 78 more times for tqpair=0x7f7ba0000b90, timestamps 09:59:36.768765 through 09:59:36.787420 ...] 
00:27:13.541 [2024-11-20 09:59:36.787602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.541 [2024-11-20 09:59:36.787635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.541 qpair failed and we were unable to recover it. 00:27:13.541 [2024-11-20 09:59:36.787831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.541 [2024-11-20 09:59:36.787870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.541 qpair failed and we were unable to recover it. 00:27:13.541 [2024-11-20 09:59:36.788070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.541 [2024-11-20 09:59:36.788106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.541 qpair failed and we were unable to recover it. 00:27:13.541 [2024-11-20 09:59:36.788370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.541 [2024-11-20 09:59:36.788403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.541 qpair failed and we were unable to recover it. 00:27:13.541 [2024-11-20 09:59:36.788597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.541 [2024-11-20 09:59:36.788631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.541 qpair failed and we were unable to recover it. 
00:27:13.542 [2024-11-20 09:59:36.788753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.788787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.788910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.788944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.789108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.789143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.789358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.789393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.789578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.789611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 
00:27:13.542 [2024-11-20 09:59:36.789860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.789895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.790083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.790117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.790323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.790358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.790606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.790639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.790840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.790873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 
00:27:13.542 [2024-11-20 09:59:36.791078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.791113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.791316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.791349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.791468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.791500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.791765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.791800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.792048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.792081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 
00:27:13.542 [2024-11-20 09:59:36.792209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.792241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.792385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.792418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.792712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.792745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.792932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.792992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.793256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.793289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 
00:27:13.542 [2024-11-20 09:59:36.793506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.793537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.793678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.793710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.793890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.793922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.794145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.794179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.794366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.794398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 
00:27:13.542 [2024-11-20 09:59:36.794655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.794687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.794876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.794909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.795066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.795100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.795223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.795256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.795396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.795430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 
00:27:13.542 [2024-11-20 09:59:36.795614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.795646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.542 qpair failed and we were unable to recover it. 00:27:13.542 [2024-11-20 09:59:36.795888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.542 [2024-11-20 09:59:36.795920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.796122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.796156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.796336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.796367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.796497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.796530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 
00:27:13.543 [2024-11-20 09:59:36.796717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.796750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.797001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.797041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.797241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.797273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.797424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.797457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.797639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.797672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 
00:27:13.543 [2024-11-20 09:59:36.797860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.797892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.798132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.798165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.798367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.798401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.798667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.798699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.798895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.798928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 
00:27:13.543 [2024-11-20 09:59:36.799076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.799109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.799297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.799330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.799512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.799544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.799681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.799713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.799912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.799945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 
00:27:13.543 [2024-11-20 09:59:36.800151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.800183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.800404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.800436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.800575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.800608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.800800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.800832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.801025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.801059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 
00:27:13.543 [2024-11-20 09:59:36.801315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.801349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.801533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.801566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.801772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.801804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.802006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.802039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.802300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.802333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 
00:27:13.543 [2024-11-20 09:59:36.802530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.802562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.802698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.802733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.802866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.802899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.803215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.803250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.803391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.803424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 
00:27:13.543 [2024-11-20 09:59:36.803542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.543 [2024-11-20 09:59:36.803576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.543 qpair failed and we were unable to recover it. 00:27:13.543 [2024-11-20 09:59:36.803688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.544 [2024-11-20 09:59:36.803720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.544 qpair failed and we were unable to recover it. 00:27:13.544 [2024-11-20 09:59:36.803832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.544 [2024-11-20 09:59:36.803864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.544 qpair failed and we were unable to recover it. 00:27:13.544 [2024-11-20 09:59:36.804046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.544 [2024-11-20 09:59:36.804081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.544 qpair failed and we were unable to recover it. 00:27:13.544 [2024-11-20 09:59:36.804330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.544 [2024-11-20 09:59:36.804362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.544 qpair failed and we were unable to recover it. 
00:27:13.544 [2024-11-20 09:59:36.804568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.544 [2024-11-20 09:59:36.804601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.544 qpair failed and we were unable to recover it. 00:27:13.544 [2024-11-20 09:59:36.804849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.544 [2024-11-20 09:59:36.804883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.544 qpair failed and we were unable to recover it. 00:27:13.544 [2024-11-20 09:59:36.805009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.544 [2024-11-20 09:59:36.805043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.544 qpair failed and we were unable to recover it. 00:27:13.544 [2024-11-20 09:59:36.805158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.544 [2024-11-20 09:59:36.805191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.544 qpair failed and we were unable to recover it. 00:27:13.544 [2024-11-20 09:59:36.805336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.544 [2024-11-20 09:59:36.805369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.544 qpair failed and we were unable to recover it. 
00:27:13.836 [2024-11-20 09:59:36.828999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.829032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.829211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.829243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.829448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.829481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.829677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.829709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.829888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.829919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 
00:27:13.836 [2024-11-20 09:59:36.830166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.830199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.830323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.830355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.830641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.830674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.830873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.830907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.831097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.831130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 
00:27:13.836 [2024-11-20 09:59:36.831277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.831310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.831500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.831531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.831739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.831771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.831883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.831916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.836 [2024-11-20 09:59:36.832048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.832082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 
00:27:13.836 [2024-11-20 09:59:36.832354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.836 [2024-11-20 09:59:36.832385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.836 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.832571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.832611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.832798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.832831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.833080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.833114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.833234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.833268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 
00:27:13.837 [2024-11-20 09:59:36.833485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.833516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.833696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.833728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.833863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.833895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.834092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.834124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.834260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.834292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 
00:27:13.837 [2024-11-20 09:59:36.834469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.834502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.834773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.834805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.835002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.835035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.835163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.835196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.835377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.835409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 
00:27:13.837 [2024-11-20 09:59:36.835593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.835626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.835759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.835790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.835984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.836018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.836211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.836242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.836421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.836452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 
00:27:13.837 [2024-11-20 09:59:36.836644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.836676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.836874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.836906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.837139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.837171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.837292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.837324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.837536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.837567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 
00:27:13.837 [2024-11-20 09:59:36.837834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.837865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.837986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.838020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.838132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.838163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.838353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.838386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.838582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.838615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 
00:27:13.837 [2024-11-20 09:59:36.838744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.838776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.838984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.839018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.839206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.839239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.839413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.839445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.839636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.839670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 
00:27:13.837 [2024-11-20 09:59:36.839784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.839815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.839935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.839993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.840190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.840224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.837 [2024-11-20 09:59:36.840436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.837 [2024-11-20 09:59:36.840468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.837 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.840656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.840688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 
00:27:13.838 [2024-11-20 09:59:36.840964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.840997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.841278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.841317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.841566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.841598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.841787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.841820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.842075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.842108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 
00:27:13.838 [2024-11-20 09:59:36.842353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.842386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.842588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.842620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.842815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.842847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.842982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.843014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.843210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.843241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 
00:27:13.838 [2024-11-20 09:59:36.843369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.843401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.843669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.843702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.843946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.843987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.844163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.844195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.844312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.844346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 
00:27:13.838 [2024-11-20 09:59:36.844526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.844557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.844740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.844773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.844946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.844986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.845265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.845298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.845480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.845513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 
00:27:13.838 [2024-11-20 09:59:36.845700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.845733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.845919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.845964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.846090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.846123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.846302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.846334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.846548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.846579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 
00:27:13.838 [2024-11-20 09:59:36.846766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.846799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.847041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.847076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.847269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.847301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.847490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.847522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 00:27:13.838 [2024-11-20 09:59:36.847636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.838 [2024-11-20 09:59:36.847669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.838 qpair failed and we were unable to recover it. 
00:27:13.841 [2024-11-20 09:59:36.872606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.841 [2024-11-20 09:59:36.872638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.841 qpair failed and we were unable to recover it. 00:27:13.841 [2024-11-20 09:59:36.872891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.841 [2024-11-20 09:59:36.872923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.841 qpair failed and we were unable to recover it. 00:27:13.841 [2024-11-20 09:59:36.873115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.841 [2024-11-20 09:59:36.873147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.841 qpair failed and we were unable to recover it. 00:27:13.841 [2024-11-20 09:59:36.873286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.873318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.873434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.873465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 
00:27:13.842 [2024-11-20 09:59:36.873643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.873674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.873820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.873858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.874097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.874130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.874317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.874350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.874588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.874621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 
00:27:13.842 [2024-11-20 09:59:36.874793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.874824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.875049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.875083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.875365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.875397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.875550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.875581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.875765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.875797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 
00:27:13.842 [2024-11-20 09:59:36.875975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.876009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.876132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.876164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.876339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.876390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.876567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.876599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.876777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.876810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 
00:27:13.842 [2024-11-20 09:59:36.877006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.877039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.877212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.877249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.877431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.877462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.877578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.877611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.877736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.877767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 
00:27:13.842 [2024-11-20 09:59:36.877970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.878003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.878147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.878179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.878416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.878448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.878643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.878674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.878806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.878838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 
00:27:13.842 [2024-11-20 09:59:36.879017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.879049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.879252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.879284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.879461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.879493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.879615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.879648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.879785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.879818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 
00:27:13.842 [2024-11-20 09:59:36.879963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.879997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.880191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.880221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.880406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.880438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.880570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.880602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.842 [2024-11-20 09:59:36.880841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.880873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 
00:27:13.842 [2024-11-20 09:59:36.881063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.842 [2024-11-20 09:59:36.881096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.842 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.881228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.881259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.881391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.881423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.881535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.881567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.881752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.881784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 
00:27:13.843 [2024-11-20 09:59:36.881995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.882030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.882137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.882169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.882412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.882445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.882627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.882659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.882777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.882810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 
00:27:13.843 [2024-11-20 09:59:36.882941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.882983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.883111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.883142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.883326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.883358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.883540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.883573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.883749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.883780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 
00:27:13.843 [2024-11-20 09:59:36.883966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.883999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.884103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.884136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.884257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.884290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.884472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.884503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.884624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.884656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 
00:27:13.843 [2024-11-20 09:59:36.884766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.884796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.884974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.885012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.885128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.885159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.885277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.885310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.885526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.885559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 
00:27:13.843 [2024-11-20 09:59:36.885768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.885801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.886064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.886096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.886346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.886379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.886552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.886583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 00:27:13.843 [2024-11-20 09:59:36.886843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.843 [2024-11-20 09:59:36.886874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.843 qpair failed and we were unable to recover it. 
00:27:13.844 [2024-11-20 09:59:36.886989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.844 [2024-11-20 09:59:36.887031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.844 qpair failed and we were unable to recover it. 00:27:13.844 [2024-11-20 09:59:36.887308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.844 [2024-11-20 09:59:36.887341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.844 qpair failed and we were unable to recover it. 00:27:13.844 [2024-11-20 09:59:36.887549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.844 [2024-11-20 09:59:36.887580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.844 qpair failed and we were unable to recover it. 00:27:13.844 [2024-11-20 09:59:36.887720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.844 [2024-11-20 09:59:36.887751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.844 qpair failed and we were unable to recover it. 00:27:13.844 [2024-11-20 09:59:36.887963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.844 [2024-11-20 09:59:36.887997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.844 qpair failed and we were unable to recover it. 
00:27:13.844 [2024-11-20 09:59:36.888130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.844 [2024-11-20 09:59:36.888163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.844 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats ~110 more times for tqpair=0x7f7ba0000b90, addr=10.0.0.2, port=4420, at timestamps 09:59:36.888 through 09:59:36.910 ...]
00:27:13.847 [2024-11-20 09:59:36.910962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.910995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it.
00:27:13.847 [2024-11-20 09:59:36.911118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.911149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.911326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.911358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.911459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.911489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.911684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.911717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.911822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.911853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 
00:27:13.847 [2024-11-20 09:59:36.912033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.912067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.912256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.912289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.912469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.912501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.912701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.912732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.912995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.913030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 
00:27:13.847 [2024-11-20 09:59:36.913223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.913253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.913436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.913467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.913740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.913772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.913904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.913936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.914078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.914111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 
00:27:13.847 [2024-11-20 09:59:36.914374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.914406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.914582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.914616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.914804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.914835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.915017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.915055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.915181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.915213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 
00:27:13.847 [2024-11-20 09:59:36.915383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.915415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.915599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.915630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.915780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.915812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.915938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.915982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.916174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.916205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 
00:27:13.847 [2024-11-20 09:59:36.916386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.916417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.916536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.916568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.916718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.916749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.916868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.916910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 00:27:13.847 [2024-11-20 09:59:36.917106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.847 [2024-11-20 09:59:36.917139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.847 qpair failed and we were unable to recover it. 
00:27:13.848 [2024-11-20 09:59:36.917343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.917375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.917497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.917529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.917818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.917851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.918061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.918095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.918222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.918253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 
00:27:13.848 [2024-11-20 09:59:36.918376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.918408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.918620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.918652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.918760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.918792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.918989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.919022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.919198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.919229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 
00:27:13.848 [2024-11-20 09:59:36.919356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.919387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.919556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.919587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.919837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.919870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.919994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.920028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.920274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.920306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 
00:27:13.848 [2024-11-20 09:59:36.920495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.920528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.920700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.920733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.920844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.920875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.921000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.921033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.921214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.921248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 
00:27:13.848 [2024-11-20 09:59:36.921430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.921461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.921714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.921747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.921865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.921897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.922032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.922065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.922187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.922219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 
00:27:13.848 [2024-11-20 09:59:36.922342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.922374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.922551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.922583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.922764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.922796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.848 [2024-11-20 09:59:36.922977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.848 [2024-11-20 09:59:36.923017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.848 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.923260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.923292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 
00:27:13.849 [2024-11-20 09:59:36.923420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.923452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.923635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.923667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.923838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.923870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.924039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.924073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.924272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.924304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 
00:27:13.849 [2024-11-20 09:59:36.924498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.924530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.924719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.924750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.924929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.924978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.925161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.925193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.925375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.925407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 
00:27:13.849 [2024-11-20 09:59:36.925536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.925567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.925742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.925773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.926034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.926077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.926319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.926350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.926543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.926574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 
00:27:13.849 [2024-11-20 09:59:36.926753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.926787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.926981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.927014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.927210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.927241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.927430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.927462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.927650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.927681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 
00:27:13.849 [2024-11-20 09:59:36.927859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.927891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.928079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.928113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.928254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.928287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.928467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.928499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.928625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.928656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 
00:27:13.849 [2024-11-20 09:59:36.928932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.928970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.929155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.929187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.929367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.929398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.929584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.929616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.929746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.929777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 
00:27:13.849 [2024-11-20 09:59:36.930018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.930050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.930160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.930191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.930373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.930405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.930514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.930544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.849 qpair failed and we were unable to recover it. 00:27:13.849 [2024-11-20 09:59:36.930802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.849 [2024-11-20 09:59:36.930833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 
00:27:13.850 [2024-11-20 09:59:36.931012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.931046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.931223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.931255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.931453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.931484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.931598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.931641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.931820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.931851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 
00:27:13.850 [2024-11-20 09:59:36.932006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.932040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.932146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.932178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.932351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.932382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.932504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.932535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.932639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.932670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 
00:27:13.850 [2024-11-20 09:59:36.932792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.932823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.932936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.932978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.933181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.933213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.933314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.933345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.933465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.933497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 
00:27:13.850 [2024-11-20 09:59:36.933736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.933768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.933877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.933909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.934144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.934177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.934302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.934333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.934587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.934618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 
00:27:13.850 [2024-11-20 09:59:36.934737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.934769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.934976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.935010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.935149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.935179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.935371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.935403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.935581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.935613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 
00:27:13.850 [2024-11-20 09:59:36.935742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.935774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.935986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.936020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.936192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.936223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.936416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.936448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.936626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.936657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 
00:27:13.850 [2024-11-20 09:59:36.936900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.936934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.937080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.937112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.937216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.937247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.937484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.937515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.937762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.937793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 
00:27:13.850 [2024-11-20 09:59:36.937984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.938016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.938136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.850 [2024-11-20 09:59:36.938168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.850 qpair failed and we were unable to recover it. 00:27:13.850 [2024-11-20 09:59:36.938353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.938385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.938661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.938694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.938820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.938850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 
00:27:13.851 [2024-11-20 09:59:36.939024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.939058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.939178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.939210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.939422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.939454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.939665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.939702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.939804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.939836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 
00:27:13.851 [2024-11-20 09:59:36.940078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.940110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.940237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.940268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.940466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.940498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.940676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.940708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.940891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.940923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 
00:27:13.851 [2024-11-20 09:59:36.941066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.941099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.941285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.941316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.941508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.941539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.941731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.941763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.941938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.941977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 
00:27:13.851 [2024-11-20 09:59:36.942164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.942197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.942315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.942345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.942480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.942511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.942687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.942720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.942844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.942875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 
00:27:13.851 [2024-11-20 09:59:36.943048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.943081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.943201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.943232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.943430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.943462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.943570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.943602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.943718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.943750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 
00:27:13.851 [2024-11-20 09:59:36.943876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.943906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.944126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.944158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.944269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.944300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.944499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.944531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.944793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.944823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 
00:27:13.851 [2024-11-20 09:59:36.945092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.945126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.945313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.945344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.945471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.945502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.851 [2024-11-20 09:59:36.945687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.851 [2024-11-20 09:59:36.945718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.851 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.945908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.945939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 
00:27:13.852 [2024-11-20 09:59:36.946126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.946158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.946270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.946302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.946573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.946603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.946863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.946895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.947036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.947068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 
00:27:13.852 [2024-11-20 09:59:36.947304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.947336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.947517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.947549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.947666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.947698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.947887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.947924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.948119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.948151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 
00:27:13.852 [2024-11-20 09:59:36.948322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.948353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.948540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.948572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.948754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.948785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.948901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.948932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.949224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.949256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 
00:27:13.852 [2024-11-20 09:59:36.949393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.949425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.949597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.949628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.949745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.949776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.949905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.949937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.950229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.950260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 
00:27:13.852 [2024-11-20 09:59:36.950368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.950400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.950669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.950701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.950818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.950850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.950990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.951024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.951241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.951272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 
00:27:13.852 [2024-11-20 09:59:36.951464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.951496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.951684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.951716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.951828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.951860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.852 qpair failed and we were unable to recover it. 00:27:13.852 [2024-11-20 09:59:36.952008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.852 [2024-11-20 09:59:36.952041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.952229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.952261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 
00:27:13.853 [2024-11-20 09:59:36.952410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.952441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.952614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.952647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.952751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.952782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.952992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.953025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.953224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.953256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 
00:27:13.853 [2024-11-20 09:59:36.953436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e9af0 is same with the state(6) to be set 00:27:13.853 [2024-11-20 09:59:36.953723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.953794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.954031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.954070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.954267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.954299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.954419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.954452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.954631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.954663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 
00:27:13.853 [2024-11-20 09:59:36.954918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.954964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.955151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.955182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.955424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.955456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.955575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.955607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.955847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.955879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 
00:27:13.853 [2024-11-20 09:59:36.956006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.956038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.956179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.956210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.956326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.956357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.956570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.956601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.956720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.956751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 
00:27:13.853 [2024-11-20 09:59:36.956854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.956885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.957007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.957038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.957167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.957199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.957377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.957410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.957589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.957619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 
00:27:13.853 [2024-11-20 09:59:36.957802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.957832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.958067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.958099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.958286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.958319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.958433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.958463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 00:27:13.853 [2024-11-20 09:59:36.958646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.853 [2024-11-20 09:59:36.958678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.853 qpair failed and we were unable to recover it. 
00:27:13.854 [2024-11-20 09:59:36.958912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.958944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.959166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.959212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.959323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.959354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.959463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.959494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.959619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.959651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 
00:27:13.854 [2024-11-20 09:59:36.959761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.959791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.959893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.959923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.960121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.960153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.960361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.960392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.960580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.960612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 
00:27:13.854 [2024-11-20 09:59:36.960732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.960763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.960905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.960935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.961183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.961215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.961320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.961352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.961525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.961557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 
00:27:13.854 [2024-11-20 09:59:36.961682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.961715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.961838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.961870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.962041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.962074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.962253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.962286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.962521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.962553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 
00:27:13.854 [2024-11-20 09:59:36.962666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.962698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.962823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.962855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.963108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.963142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.963278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.963309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.963424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.963455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 
00:27:13.854 [2024-11-20 09:59:36.963683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.963714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.963834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.963865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.964056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.964088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.964296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.964328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.964513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.964544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 
00:27:13.854 [2024-11-20 09:59:36.964753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.964784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.854 [2024-11-20 09:59:36.964892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.854 [2024-11-20 09:59:36.964923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.854 qpair failed and we were unable to recover it. 00:27:13.855 [2024-11-20 09:59:36.965063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.855 [2024-11-20 09:59:36.965095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.855 qpair failed and we were unable to recover it. 00:27:13.855 [2024-11-20 09:59:36.965207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.855 [2024-11-20 09:59:36.965239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.855 qpair failed and we were unable to recover it. 00:27:13.855 [2024-11-20 09:59:36.965421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.855 [2024-11-20 09:59:36.965451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.855 qpair failed and we were unable to recover it. 
00:27:13.855 [2024-11-20 09:59:36.965574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.855 [2024-11-20 09:59:36.965606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.855 qpair failed and we were unable to recover it. 
00:27:13.858 [2024-11-20 09:59:36.988500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.988533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.988646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.988677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.988794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.988824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.988996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.989029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.989210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.989242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 
00:27:13.858 [2024-11-20 09:59:36.989370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.989402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.989605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.989636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.989749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.989780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.989995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.990027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.990224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.990255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 
00:27:13.858 [2024-11-20 09:59:36.990489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.990520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.990629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.990666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.990800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.990831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.990969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.991002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 00:27:13.858 [2024-11-20 09:59:36.991113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.858 [2024-11-20 09:59:36.991144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.858 qpair failed and we were unable to recover it. 
00:27:13.858 [2024-11-20 09:59:36.991262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.991293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.991464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.991495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.991731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.991762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.991970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.992003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.992177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.992209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-11-20 09:59:36.992442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.992471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.992674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.992705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.992805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.992836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.993024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.993056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.993163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.993195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-11-20 09:59:36.993419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.993450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.993685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.993716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.993973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.994006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.994188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.994219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.994398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.994429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-11-20 09:59:36.994550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.994580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.994769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.994800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.994997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.995029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.995211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.995242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.995478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.995509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-11-20 09:59:36.995689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.995720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.995944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.995986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.996107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.996140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.996314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.996345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.996543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.996574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.859 [2024-11-20 09:59:36.996690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.996722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.996922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.996964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.997203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.997234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.997434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.997466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 00:27:13.859 [2024-11-20 09:59:36.997633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.859 [2024-11-20 09:59:36.997663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.859 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-11-20 09:59:36.997933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:36.997973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:36.998212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:36.998244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:36.998415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:36.998445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:36.998679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:36.998709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:36.998900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:36.998932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-11-20 09:59:36.999124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:36.999155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:36.999340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:36.999382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:36.999511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:36.999542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:36.999804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:36.999835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.000076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.000109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-11-20 09:59:37.000224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.000255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.000424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.000456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.000629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.000660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.000773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.000804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.000999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.001032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-11-20 09:59:37.001200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.001231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.001466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.001498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.001769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.001800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.001974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.002007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.002198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.002228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-11-20 09:59:37.002481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.002512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.002615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.002647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.002821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.002851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.003037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.003070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.003256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.003287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-11-20 09:59:37.003472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.003503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.003622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.003654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.003885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.003915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.004053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.004086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.004277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.004308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-11-20 09:59:37.004510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.004541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.004675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.004707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.004969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.005002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.005134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.005166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.860 [2024-11-20 09:59:37.005334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.005364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 
00:27:13.860 [2024-11-20 09:59:37.005545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.860 [2024-11-20 09:59:37.005575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.860 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.005760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.005791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.005907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.005938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.006226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.006257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.006424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.006456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 
00:27:13.861 [2024-11-20 09:59:37.006586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.006617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.006736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.006768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.006974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.007006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.007110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.007142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.007308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.007340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 
00:27:13.861 [2024-11-20 09:59:37.007469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.007500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.007665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.007702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.007939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.007981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.008167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.008198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.008400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.008431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 
00:27:13.861 [2024-11-20 09:59:37.008618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.008649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.008891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.008922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.009051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.009082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.009214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.009245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.009478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.009508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 
00:27:13.861 [2024-11-20 09:59:37.009622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.009652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.009831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.009863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.010046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.010078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.010294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.010325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.010587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.010619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 
00:27:13.861 [2024-11-20 09:59:37.010814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.010846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.010968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.011001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.011173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.011204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.011384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.011414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.011599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.011629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 
00:27:13.861 [2024-11-20 09:59:37.011799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.011830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.012092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.012124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.012305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.012337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.012507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.012538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.012718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.012748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 
00:27:13.861 [2024-11-20 09:59:37.012932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.012971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.013106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.013137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.861 [2024-11-20 09:59:37.013254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.861 [2024-11-20 09:59:37.013285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.861 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.013519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.013552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.013678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.013708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 
00:27:13.862 [2024-11-20 09:59:37.013821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.013852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.013989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.014021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.014192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.014223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.014462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.014494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.014729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.014759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 
00:27:13.862 [2024-11-20 09:59:37.015018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.015050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.015167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.015197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.015322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.015353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.015521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.015551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.015730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.015760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 
00:27:13.862 [2024-11-20 09:59:37.015971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.016003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.016264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.016302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.016417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.016447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.016563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.016594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.016782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.016813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 
00:27:13.862 [2024-11-20 09:59:37.017072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.017104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.017280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.017312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.017547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.017578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.017700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.017731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.017916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.017958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 
00:27:13.862 [2024-11-20 09:59:37.018142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.018174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.018432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.018463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.018629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.018661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.018870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.018902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.019033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.019065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 
00:27:13.862 [2024-11-20 09:59:37.019237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.019270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.019453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.019484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.019601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.019631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.019760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.019791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.020002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.020036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 
00:27:13.862 [2024-11-20 09:59:37.020275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.020306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.020495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.020527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.020647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.020679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.020806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.020837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 00:27:13.862 [2024-11-20 09:59:37.020970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.021002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.862 qpair failed and we were unable to recover it. 
00:27:13.862 [2024-11-20 09:59:37.021182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.862 [2024-11-20 09:59:37.021215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.021433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.021464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.021640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.021671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.021804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.021836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.021936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.021988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 
00:27:13.863 [2024-11-20 09:59:37.022106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.022137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.022394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.022426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.022634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.022664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.022830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.022862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.023114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.023147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 
00:27:13.863 [2024-11-20 09:59:37.023347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.023379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.023564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.023596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.023779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.023810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.023978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.024011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.024125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.024157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 
00:27:13.863 [2024-11-20 09:59:37.024438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.024469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.024721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.024757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.024872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.024903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.025098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.025130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.025249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.025281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 
00:27:13.863 [2024-11-20 09:59:37.025518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.025550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.025677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.025709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.025881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.025911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.026097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.026131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 00:27:13.863 [2024-11-20 09:59:37.026332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.863 [2024-11-20 09:59:37.026363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:13.863 qpair failed and we were unable to recover it. 
00:27:13.863 [2024-11-20 09:59:37.026625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.863 [2024-11-20 09:59:37.026657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.863 qpair failed and we were unable to recover it.
00:27:13.863 [2024-11-20 09:59:37.026843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.863 [2024-11-20 09:59:37.026874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.863 qpair failed and we were unable to recover it.
00:27:13.863 [2024-11-20 09:59:37.027121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.863 [2024-11-20 09:59:37.027153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.863 qpair failed and we were unable to recover it.
00:27:13.863 [2024-11-20 09:59:37.027341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.863 [2024-11-20 09:59:37.027373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.863 qpair failed and we were unable to recover it.
00:27:13.863 [2024-11-20 09:59:37.027542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.863 [2024-11-20 09:59:37.027574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.863 qpair failed and we were unable to recover it.
00:27:13.863 [2024-11-20 09:59:37.027747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.863 [2024-11-20 09:59:37.027778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.863 qpair failed and we were unable to recover it.
00:27:13.863 [2024-11-20 09:59:37.027962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.863 [2024-11-20 09:59:37.027995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.863 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.028177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.028209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.028324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.028355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.028468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.028500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.028692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.028725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.028932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.028973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.029161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.029193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.029373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.029405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.029606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.029637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.029920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.029975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.030238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.030270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.030441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.030472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.030668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.030700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.030935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.030979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.031265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.031297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.031558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.031590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.031704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.031735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.031904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.031935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.032160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.032192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.032416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.032447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.032577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.032608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.032775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.032807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.033043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.033076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.033264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.033296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.033463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.033495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.033692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.033730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.033919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.033957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.034163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.034195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.034326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.034357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.034568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.034600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.034835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.034867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.035048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.035081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.035254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.035285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.035477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.035509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.035615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.035646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.035913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.035944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.036069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.036101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.036362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.036394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.864 qpair failed and we were unable to recover it.
00:27:13.864 [2024-11-20 09:59:37.036586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.864 [2024-11-20 09:59:37.036617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.036817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.036847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.037048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.037082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.037253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.037284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.037481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.037512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.037796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.037828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.037991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.038025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.038151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.038183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.038354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.038385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.038565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.038597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.038783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.038815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.038983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.039015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.039206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.039238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.039360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.039391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.039631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.039703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.039835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.039871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.040111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.040145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.040264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.040297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.040533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.040562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.040730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.040762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.041018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.041052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.041223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.041254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.041441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.041471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.041640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.041671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.041911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.041942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.042145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.042176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.042356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.042389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.042570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.042600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.042779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.042810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.043060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.043093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.043357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.043388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.043511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.043543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.043664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.043695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.043929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.043970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.044158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.044189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.044474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.044504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.044752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.044783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.044969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.045002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.865 [2024-11-20 09:59:37.045202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.865 [2024-11-20 09:59:37.045233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.865 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.045437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.045467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.045637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.045669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.045797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.045834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.046100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.046134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.046245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.046277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.046558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.046589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.046774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.046806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.046934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.046974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.047144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.047175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.047442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.047473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.047686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.047717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.047900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.866 [2024-11-20 09:59:37.047930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.866 qpair failed and we were unable to recover it.
00:27:13.866 [2024-11-20 09:59:37.048056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.048089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.048257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.048288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.048423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.048454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.048640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.048671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.048847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.048879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 
00:27:13.866 [2024-11-20 09:59:37.048991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.049024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.049213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.049245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.049415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.049445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.049726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.049757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.049886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.049917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 
00:27:13.866 [2024-11-20 09:59:37.050119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.050151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.050262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.050292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.050473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.050503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.050618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.050648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.050820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.050850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 
00:27:13.866 [2024-11-20 09:59:37.051033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.051066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.051190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.051221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.051400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.051431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.051612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.051644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.051744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.051773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 
00:27:13.866 [2024-11-20 09:59:37.051958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.051991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.052195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.052227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.052482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.052514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.866 qpair failed and we were unable to recover it. 00:27:13.866 [2024-11-20 09:59:37.052632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.866 [2024-11-20 09:59:37.052663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.052897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.052928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 
00:27:13.867 [2024-11-20 09:59:37.053198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.053231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.053437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.053468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.053637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.053667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.053865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.053896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.054047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.054079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 
00:27:13.867 [2024-11-20 09:59:37.054245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.054277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.054381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.054419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.054528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.054559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.054814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.054845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.055037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.055071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 
00:27:13.867 [2024-11-20 09:59:37.055305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.055337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.055540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.055572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.055755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.055786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.055964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.055997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.056202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.056233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 
00:27:13.867 [2024-11-20 09:59:37.056411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.056440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.056726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.056758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.056941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.056981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.057243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.057275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.057405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.057436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 
00:27:13.867 [2024-11-20 09:59:37.057676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.057707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.057904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.057934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.058191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.058223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.058334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.058365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.058477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.058506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 
00:27:13.867 [2024-11-20 09:59:37.058633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.058664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.058868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.058900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.059093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.059125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.059386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.059417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.059594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.059625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 
00:27:13.867 [2024-11-20 09:59:37.059723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.059753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.060000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.060033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.060160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.060192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.867 [2024-11-20 09:59:37.060310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.867 [2024-11-20 09:59:37.060346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.867 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.060463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.060493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 
00:27:13.868 [2024-11-20 09:59:37.060695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.060725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.060905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.060936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.061079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.061111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.061292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.061323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.061429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.061460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 
00:27:13.868 [2024-11-20 09:59:37.061583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.061615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.061781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.061813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.061944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.061985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.062162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.062193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.062447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.062477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 
00:27:13.868 [2024-11-20 09:59:37.062645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.062676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.062874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.062906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.063096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.063129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.063310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.063341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.063530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.063560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 
00:27:13.868 [2024-11-20 09:59:37.063692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.063722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.063911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.063942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.064196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.064228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.064333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.064363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.064488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.064519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 
00:27:13.868 [2024-11-20 09:59:37.064634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.064665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.064805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.064836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.065003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.065036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.065225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.065256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.065492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.065522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 
00:27:13.868 [2024-11-20 09:59:37.065636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.065667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.065856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.065887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.066086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.066120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.066306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.066337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.066453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.066484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 
00:27:13.868 [2024-11-20 09:59:37.066750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.066782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.066906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.868 [2024-11-20 09:59:37.066937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.868 qpair failed and we were unable to recover it. 00:27:13.868 [2024-11-20 09:59:37.067116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.067147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.067339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.067370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.067577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.067608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 
00:27:13.869 [2024-11-20 09:59:37.067794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.067825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.067942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.067981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.068158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.068189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.068386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.068417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.068529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.068566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 
00:27:13.869 [2024-11-20 09:59:37.068738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.068769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.068973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.069005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.069259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.069289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.069524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.069555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.069805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.069835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 
00:27:13.869 [2024-11-20 09:59:37.070020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.070053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.070234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.070265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.070451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.070482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.070716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.070747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.070931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.070972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 
00:27:13.869 [2024-11-20 09:59:37.071252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.071282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.071460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.071490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.071600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.071631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.071835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.071866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.071973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.072005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 
00:27:13.869 [2024-11-20 09:59:37.072240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.072271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.072459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.072489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.072679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.072711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.072830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.072860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.073058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.073090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 
00:27:13.869 [2024-11-20 09:59:37.073208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.073239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.073499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.073531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.073648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.073679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.073869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.073900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.074030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.074062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 
00:27:13.869 [2024-11-20 09:59:37.074265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.074295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.074492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.074528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.074704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.074736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.074862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.074892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 00:27:13.869 [2024-11-20 09:59:37.075085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.075118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.869 qpair failed and we were unable to recover it. 
00:27:13.869 [2024-11-20 09:59:37.075308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.869 [2024-11-20 09:59:37.075340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.075458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.075487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.075607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.075638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.075837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.075868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.076041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.076073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 
00:27:13.870 [2024-11-20 09:59:37.076243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.076274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.076375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.076406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.076586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.076616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.076781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.076811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.076987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.077020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 
00:27:13.870 [2024-11-20 09:59:37.077217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.077248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.077504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.077535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.077647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.077678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.077797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.077828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.077996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.078029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 
00:27:13.870 [2024-11-20 09:59:37.078225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.078256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.078516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.078547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.078742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.078774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.079013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.079046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.079266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.079298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 
00:27:13.870 [2024-11-20 09:59:37.079498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.079530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.079703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.079734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.079862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.079894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.080189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.080221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.080514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.080546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 
00:27:13.870 [2024-11-20 09:59:37.080782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.080814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.080993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.081025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.081196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.081229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.081363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.081394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.081522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.081553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 
00:27:13.870 [2024-11-20 09:59:37.081752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.081783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.081977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.082010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.082136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.082167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.082269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.082300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.082562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.082593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 
00:27:13.870 [2024-11-20 09:59:37.082839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.082871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.082989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.083023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.083286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.083329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.870 [2024-11-20 09:59:37.083455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.870 [2024-11-20 09:59:37.083487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.870 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.083720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.083751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 
00:27:13.871 [2024-11-20 09:59:37.083945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.083987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.084246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.084278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.084486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.084518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.084730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.084762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.084968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.085002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 
00:27:13.871 [2024-11-20 09:59:37.085174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.085205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.085487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.085518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.085779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.085810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.086092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.086125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.086318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.086350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 
00:27:13.871 [2024-11-20 09:59:37.086467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.086498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.086757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.086788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.087045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.087076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.087253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.087284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 00:27:13.871 [2024-11-20 09:59:37.087453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.871 [2024-11-20 09:59:37.087483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.871 qpair failed and we were unable to recover it. 
00:27:13.873 [2024-11-20 09:59:37.107446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.873 [2024-11-20 09:59:37.107477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:13.873 qpair failed and we were unable to recover it.
00:27:13.873 [2024-11-20 09:59:37.107712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.873 [2024-11-20 09:59:37.107783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:13.873 qpair failed and we were unable to recover it.
00:27:13.873 [2024-11-20 09:59:37.108070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.873 [2024-11-20 09:59:37.108108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:13.873 qpair failed and we were unable to recover it.
00:27:13.873 [2024-11-20 09:59:37.108228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.873 [2024-11-20 09:59:37.108261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:13.874 qpair failed and we were unable to recover it.
00:27:13.874 [2024-11-20 09:59:37.108388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.874 [2024-11-20 09:59:37.108420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:13.874 qpair failed and we were unable to recover it.
00:27:13.874 [2024-11-20 09:59:37.112272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.112304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.112505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.112536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.112774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.112807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.112924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.112965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.113090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.113121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 
00:27:13.874 [2024-11-20 09:59:37.113251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.113282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.113470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.113502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.113684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.113715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.113836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.113868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.114037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.114070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 
00:27:13.874 [2024-11-20 09:59:37.114180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.114211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.114329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.114360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.114537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.114569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.114738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.114769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.114884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.114916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 
00:27:13.874 [2024-11-20 09:59:37.115101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.115134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.115307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.115338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.115451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.115482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.115671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.115703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.115887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.115919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 
00:27:13.874 [2024-11-20 09:59:37.116118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.116154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.116354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.116385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.116599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.116630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.116815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.116846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 00:27:13.874 [2024-11-20 09:59:37.117117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.874 [2024-11-20 09:59:37.117149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.874 qpair failed and we were unable to recover it. 
00:27:13.874 [2024-11-20 09:59:37.117337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.117368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.117537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.117567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.117688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.117718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.117912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.117943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.118139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.118169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 
00:27:13.875 [2024-11-20 09:59:37.118444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.118476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.118592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.118622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.118747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.118778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.119013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.119046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.119228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.119259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 
00:27:13.875 [2024-11-20 09:59:37.119526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.119558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.119821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.119852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.119976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.120008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.120205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.120237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.120412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.120442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 
00:27:13.875 [2024-11-20 09:59:37.120571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.120603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.120732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.120764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.120937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.120979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.121165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.121196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.121391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.121422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 
00:27:13.875 [2024-11-20 09:59:37.121683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.121714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.121970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.122003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.122178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.122209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.122479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.122516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.122660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.122690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 
00:27:13.875 [2024-11-20 09:59:37.122866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.122898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.123019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.123053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.123186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.123217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.123430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.123461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.123720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.123751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 
00:27:13.875 [2024-11-20 09:59:37.123942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.123984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.124167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.124199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.124369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.124399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.124641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.124672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.124853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.124884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 
00:27:13.875 [2024-11-20 09:59:37.124995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.125027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.125231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.125263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.125454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.125486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.125673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.125704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.125985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.126019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 
00:27:13.875 [2024-11-20 09:59:37.126146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.126177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.875 [2024-11-20 09:59:37.126348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.875 [2024-11-20 09:59:37.126379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.875 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.126573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.126606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.126712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.126743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.126920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.126959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 
00:27:13.876 [2024-11-20 09:59:37.127090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.127121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.127288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.127319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.127489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.127520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.127703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.127734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.127848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.127880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 
00:27:13.876 [2024-11-20 09:59:37.128073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.128105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.128349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.128380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.128567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.128598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.128734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.128766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 00:27:13.876 [2024-11-20 09:59:37.128968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.876 [2024-11-20 09:59:37.129000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:13.876 qpair failed and we were unable to recover it. 
00:27:14.168 [2024-11-20 09:59:37.152100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.168 [2024-11-20 09:59:37.152132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.168 qpair failed and we were unable to recover it. 00:27:14.168 [2024-11-20 09:59:37.152389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.168 [2024-11-20 09:59:37.152420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.168 qpair failed and we were unable to recover it. 00:27:14.168 [2024-11-20 09:59:37.152537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.168 [2024-11-20 09:59:37.152568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.168 qpair failed and we were unable to recover it. 00:27:14.168 [2024-11-20 09:59:37.152816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.168 [2024-11-20 09:59:37.152848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.168 qpair failed and we were unable to recover it. 00:27:14.168 [2024-11-20 09:59:37.152976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.168 [2024-11-20 09:59:37.153008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.168 qpair failed and we were unable to recover it. 
00:27:14.168 [2024-11-20 09:59:37.153243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.168 [2024-11-20 09:59:37.153274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.168 qpair failed and we were unable to recover it. 00:27:14.168 [2024-11-20 09:59:37.153444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.168 [2024-11-20 09:59:37.153475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.168 qpair failed and we were unable to recover it. 00:27:14.168 [2024-11-20 09:59:37.153654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.168 [2024-11-20 09:59:37.153685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.168 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.153823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.153854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.154033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.154065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 
00:27:14.169 [2024-11-20 09:59:37.154184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.154214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.154338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.154369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.154550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.154581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.154826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.154857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.154965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.154998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 
00:27:14.169 [2024-11-20 09:59:37.155170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.155201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.155306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.155337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.155520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.155551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.155668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.155698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.155883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.155914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 
00:27:14.169 [2024-11-20 09:59:37.156180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.156213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.156476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.156506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.156616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.156647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.156842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.156872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.156996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.157029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 
00:27:14.169 [2024-11-20 09:59:37.157213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.157244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.157431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.157463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.157602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.157633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.157805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.157837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.157975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.158008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 
00:27:14.169 [2024-11-20 09:59:37.158231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.158262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.158453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.158484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.158720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.158752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.158924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.158963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.159174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.159205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 
00:27:14.169 [2024-11-20 09:59:37.159390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.159427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.159686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.159716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.159839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.159870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.160041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.160073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.160191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.160223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 
00:27:14.169 [2024-11-20 09:59:37.160346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.160377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.160493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.160525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.160639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.160671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.160773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.160803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.161003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.161036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 
00:27:14.169 [2024-11-20 09:59:37.161204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.169 [2024-11-20 09:59:37.161235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.169 qpair failed and we were unable to recover it. 00:27:14.169 [2024-11-20 09:59:37.161356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.161387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.161499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.161531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.161789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.161819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.161995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.162029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 
00:27:14.170 [2024-11-20 09:59:37.162209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.162240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.162500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.162530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.162717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.162748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.162928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.162968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.163225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.163255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 
00:27:14.170 [2024-11-20 09:59:37.163378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.163408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.163593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.163625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.163910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.163941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.164134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.164165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.164348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.164380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 
00:27:14.170 [2024-11-20 09:59:37.164554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.164586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.164724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.164755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.164897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.164928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.165195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.165228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.165361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.165393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 
00:27:14.170 [2024-11-20 09:59:37.165595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.165625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.165741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.165772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.165894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.165927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.166136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.166167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.166299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.166330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 
00:27:14.170 [2024-11-20 09:59:37.166455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.166485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.166603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.166634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.166815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.166845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.167051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.167084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 00:27:14.170 [2024-11-20 09:59:37.167204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.167234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it. 
00:27:14.170 [2024-11-20 09:59:37.167443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.170 [2024-11-20 09:59:37.167474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.170 qpair failed and we were unable to recover it.
00:27:14.173 [messages repeated for every subsequent connect attempt from 09:59:37.167658 through 09:59:37.191466: each connect() to addr=10.0.0.2, port=4420 on tqpair=0x8dbba0 failed with errno = 111 (ECONNREFUSED), and the qpair could not be recovered]
00:27:14.173 [2024-11-20 09:59:37.191580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.173 [2024-11-20 09:59:37.191609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.173 qpair failed and we were unable to recover it. 00:27:14.173 [2024-11-20 09:59:37.191781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.173 [2024-11-20 09:59:37.191813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.173 qpair failed and we were unable to recover it. 00:27:14.173 [2024-11-20 09:59:37.192046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.192078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.192212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.192243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.192492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.192523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 
00:27:14.174 [2024-11-20 09:59:37.192692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.192724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.193009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.193041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.193219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.193250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.193437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.193468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.193700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.193732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 
00:27:14.174 [2024-11-20 09:59:37.194005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.194038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.194211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.194243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.194425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.194456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.194666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.194698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.194816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.194846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 
00:27:14.174 [2024-11-20 09:59:37.194970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.195002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.195173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.195205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.195393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.195425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.195601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.195631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.195815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.195846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 
00:27:14.174 [2024-11-20 09:59:37.196049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.196082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.196333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.196363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.196542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.196577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.196761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.196793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.196971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.197002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 
00:27:14.174 [2024-11-20 09:59:37.197186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.197218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.197395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.197427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.197544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.197575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.197760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.197791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.197925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.197987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 
00:27:14.174 [2024-11-20 09:59:37.198108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.198140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.198276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.198308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.198411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.198442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.198565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.198596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.198796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.198828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 
00:27:14.174 [2024-11-20 09:59:37.199099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.199131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.199318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.199350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.199544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.199575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.199753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.199784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 00:27:14.174 [2024-11-20 09:59:37.199884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.174 [2024-11-20 09:59:37.199913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.174 qpair failed and we were unable to recover it. 
00:27:14.174 [2024-11-20 09:59:37.200126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.200158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.200402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.200432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.200623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.200654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.200890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.200921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.201132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.201164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 
00:27:14.175 [2024-11-20 09:59:37.201268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.201299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.201541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.201572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.201751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.201782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.201969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.202002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.202183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.202220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 
00:27:14.175 [2024-11-20 09:59:37.202481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.202512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.202643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.202674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.202964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.202997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.203172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.203203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.203375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.203407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 
00:27:14.175 [2024-11-20 09:59:37.203603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.203634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.203763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.203794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.203977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.204009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.204202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.204233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.204439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.204470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 
00:27:14.175 [2024-11-20 09:59:37.204706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.204736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.204851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.204881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.205086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.205118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.205225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.205257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.205446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.205477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 
00:27:14.175 [2024-11-20 09:59:37.205714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.205747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.205875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.205906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.206107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.206141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.206376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.206408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.206527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.206558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 
00:27:14.175 [2024-11-20 09:59:37.206732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.206763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.206876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.206907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.207097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.207131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.207241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.175 [2024-11-20 09:59:37.207271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.175 qpair failed and we were unable to recover it. 00:27:14.175 [2024-11-20 09:59:37.207411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.207442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 
00:27:14.176 [2024-11-20 09:59:37.207616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.207646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.207756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.207788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.207975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.208008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.208128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.208158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.208337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.208368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 
00:27:14.176 [2024-11-20 09:59:37.208550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.208581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.208830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.208860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.208967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.209000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.209111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.209140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.209258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.209288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 
00:27:14.176 [2024-11-20 09:59:37.209469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.209502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.209694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.209726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.209901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.209931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.210062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.210095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.210288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.210320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 
00:27:14.176 [2024-11-20 09:59:37.210442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.210479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.210672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.210703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.210962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.210994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.211165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.211196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.211455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.211487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 
00:27:14.176 [2024-11-20 09:59:37.211679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.211710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.211897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.211929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.212128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.212161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.212281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.212312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.212494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.212525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 
00:27:14.176 [2024-11-20 09:59:37.212696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.212729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.212923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.212963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.213085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.213117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.213293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.213322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.213507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.213538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 
00:27:14.176 [2024-11-20 09:59:37.213715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.213746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.213876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.213907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.214029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.214062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.214180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.214212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.214390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.214421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 
00:27:14.176 [2024-11-20 09:59:37.214539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.214572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.214744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.214775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.176 qpair failed and we were unable to recover it. 00:27:14.176 [2024-11-20 09:59:37.215015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.176 [2024-11-20 09:59:37.215048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.215235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.215265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.215506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.215537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 
00:27:14.177 [2024-11-20 09:59:37.215773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.215806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.215922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.215960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.216197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.216234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.216436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.216467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.216673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.216705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 
00:27:14.177 [2024-11-20 09:59:37.216899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.216930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.217057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.217088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.217199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.217230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.217414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.217445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.217691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.217723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 
00:27:14.177 [2024-11-20 09:59:37.217934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.217974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.218240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.218272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.218393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.218425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.218661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.218692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.218897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.218929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 
00:27:14.177 [2024-11-20 09:59:37.219126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.219158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.219330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.219403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.219623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.219659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.219837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.219869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.220047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.220081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 
00:27:14.177 [2024-11-20 09:59:37.220203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.220235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.220421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.220452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.220640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.220671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.220788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.220820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.221027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.221061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 
00:27:14.177 [2024-11-20 09:59:37.221259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.221291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.221465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.221495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.221661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.221693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.177 [2024-11-20 09:59:37.221962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.177 [2024-11-20 09:59:37.221997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.177 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.222173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.222223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 
00:27:14.178 [2024-11-20 09:59:37.222424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.222454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.222624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.222655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.222893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.222923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.223187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.223219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.223400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.223431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 
00:27:14.178 [2024-11-20 09:59:37.223639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.223670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.223878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.223909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.224109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.224141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.224415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.224446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.224699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.224730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 
00:27:14.178 [2024-11-20 09:59:37.224900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.224931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.225076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.225107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.225236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.225267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.225480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.225511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.225689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.225721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 
00:27:14.178 [2024-11-20 09:59:37.225897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.225928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.226117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.226149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.226269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.226300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.226483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.226514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.226698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.226729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 
00:27:14.178 [2024-11-20 09:59:37.226847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.226877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.227067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.227100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.227230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.227260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.227497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.227528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 00:27:14.178 [2024-11-20 09:59:37.227715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.227745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 
00:27:14.178 [2024-11-20 09:59:37.227865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.178 [2024-11-20 09:59:37.227895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.178 qpair failed and we were unable to recover it. 
00:27:14.178 [... the same three-line error pattern (posix_sock_create errno 111, nvme_tcp_qpair_connect_sock failure for addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it") repeats continuously for tqpair=0x7f7ba8000b90 from 09:59:37.227 through 09:59:37.236, then for tqpair=0x7f7ba0000b90 from 09:59:37.236 onward ...] 
00:27:14.181 [2024-11-20 09:59:37.251903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.181 [2024-11-20 09:59:37.251935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.181 qpair failed and we were unable to recover it. 
00:27:14.181 [2024-11-20 09:59:37.252063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.181 [2024-11-20 09:59:37.252095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.181 qpair failed and we were unable to recover it. 00:27:14.181 [2024-11-20 09:59:37.252227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.181 [2024-11-20 09:59:37.252260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.181 qpair failed and we were unable to recover it. 00:27:14.181 [2024-11-20 09:59:37.252377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.181 [2024-11-20 09:59:37.252409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.181 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.252526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.252558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.252731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.252761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 
00:27:14.182 [2024-11-20 09:59:37.252944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.252986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.253172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.253203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.253464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.253496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.253685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.253718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.253887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.253918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 
00:27:14.182 [2024-11-20 09:59:37.254137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.254169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.254348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.254379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.254616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.254648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.254884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.254915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.255216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.255250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 
00:27:14.182 [2024-11-20 09:59:37.255371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.255404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.255590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.255622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.255816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.255849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.255990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.256023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.256191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.256223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 
00:27:14.182 [2024-11-20 09:59:37.256409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.256441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.256610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.256641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.256905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.256938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.257138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.257170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.257345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.257378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 
00:27:14.182 [2024-11-20 09:59:37.257509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.257542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.257713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.257745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.257984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.258018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.258261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.258294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.258406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.258439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 
00:27:14.182 [2024-11-20 09:59:37.258680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.258713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.258884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.258916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.259198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.259233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.259448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.259479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.259662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.259695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 
00:27:14.182 [2024-11-20 09:59:37.259815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.259847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.260051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.260083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.260203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.260236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.260341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.260371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 00:27:14.182 [2024-11-20 09:59:37.260522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.182 [2024-11-20 09:59:37.260554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.182 qpair failed and we were unable to recover it. 
00:27:14.182 [2024-11-20 09:59:37.260734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.260765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.261001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.261035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.261222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.261254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.261432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.261464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.261636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.261668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 
00:27:14.183 [2024-11-20 09:59:37.261930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.261969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.262241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.262279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.262480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.262511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.262646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.262679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.262937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.262978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 
00:27:14.183 [2024-11-20 09:59:37.263110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.263141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.263270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.263301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.263502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.263534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.263719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.263749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.263923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.263965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 
00:27:14.183 [2024-11-20 09:59:37.264093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.264124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.264400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.264432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.264675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.264706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.264901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.264933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.265074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.265106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 
00:27:14.183 [2024-11-20 09:59:37.265283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.265315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.265421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.265453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.265562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.265594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.265767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.265798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.266003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.266036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 
00:27:14.183 [2024-11-20 09:59:37.266207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.266239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.266350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.266381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.266515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.266547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.266747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.266777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.266955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.266988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 
00:27:14.183 [2024-11-20 09:59:37.267193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.267225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.267462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.267494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.267676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.267708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.267848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.267880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 00:27:14.183 [2024-11-20 09:59:37.268009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.268042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 
00:27:14.183 [2024-11-20 09:59:37.268229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.183 [2024-11-20 09:59:37.268261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.183 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeated through 2024-11-20 09:59:37.291258 ...]
00:27:14.187 [2024-11-20 09:59:37.291447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.291479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.291724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.291756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.291944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.292008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.292185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.292217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.292341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.292373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 09:59:37.292611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.292643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.292811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.292842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.293012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.293045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.293257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.293289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.293574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.293605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 09:59:37.293724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.293756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.293940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.293981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.294093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.294125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.294388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.294418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.294620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.294652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 09:59:37.294780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.294812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.295052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.295091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.295269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.295300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.295565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.295597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.295733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.295765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 09:59:37.295942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.295981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.296181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.296213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.296400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.296431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.296533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.296564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.296687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.296718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 09:59:37.296893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.296926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.297203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.297236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.297368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.297399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.297576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.297608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.297795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.297827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 
00:27:14.187 [2024-11-20 09:59:37.298100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.298134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.298338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.298369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.298563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.298593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.298715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.187 [2024-11-20 09:59:37.298746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.187 qpair failed and we were unable to recover it. 00:27:14.187 [2024-11-20 09:59:37.298852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.298884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 09:59:37.299122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.299154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.299345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.299376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.299479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.299510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.299770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.299802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.299984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.300017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 09:59:37.300190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.300222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.300330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.300362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.300566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.300597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.300795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.300827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.301088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.301120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 09:59:37.301262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.301294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.301408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.301440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.301609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.301641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.301830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.301862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.302033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.302066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 09:59:37.302193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.302225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.302345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.302377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.302485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.302517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.302640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.302673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.302849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.302882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 09:59:37.303065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.303099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.303278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.303316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.303419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.303452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.303556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.303588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.303762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.303794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 09:59:37.303968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.304007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.304189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.304221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.304387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.304420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.304663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.304695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.304845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.304876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 09:59:37.304984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.305017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.305135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.305169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.305353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.305384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.305630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.305663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.305831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.305864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 
00:27:14.188 [2024-11-20 09:59:37.306001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.306034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.188 [2024-11-20 09:59:37.306215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.188 [2024-11-20 09:59:37.306246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.188 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.306357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.306389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.306640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.306671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.306788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.306820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 09:59:37.307072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.307105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.307291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.307323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.307447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.307478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.307603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.307635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.307901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.307931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 09:59:37.308123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.308156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.308331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.308362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.308479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.308511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.308786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.308818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.309060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.309093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 09:59:37.309218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.309250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.309487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.309519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.309718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.309751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.309864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.309894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.310167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.310200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 09:59:37.310322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.310353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.310521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.310553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.310702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.310734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.310916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.310956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.311196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.311228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 09:59:37.311340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.311372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.311499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.311536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.311718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.311749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.311978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.312011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.312122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.312152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 09:59:37.312321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.312352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.312490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.312523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.312702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.312733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.312995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.313029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 00:27:14.189 [2024-11-20 09:59:37.313198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.189 [2024-11-20 09:59:37.313231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.189 qpair failed and we were unable to recover it. 
00:27:14.189 [2024-11-20 09:59:37.313470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.313502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.313651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.313683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.313875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.313907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.314100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.314133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.314313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.314345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 
00:27:14.190 [2024-11-20 09:59:37.314489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.314522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.314761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.314793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.314940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.314980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.315218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.315250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.315424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.315456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 
00:27:14.190 [2024-11-20 09:59:37.315576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.315607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.315730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.315763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.315881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.315913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.316067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.316099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.316363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.316395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 
00:27:14.190 [2024-11-20 09:59:37.316659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.316693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.316804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.316837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.316958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.316992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.317122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.317155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.317284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.317315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 
00:27:14.190 [2024-11-20 09:59:37.317487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.317518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.317692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.317723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.317897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.317929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.318121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.318153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.318337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.318368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 
00:27:14.190 [2024-11-20 09:59:37.318627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.318659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.318902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.318934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.319083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.319115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.319289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.319321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.319440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.319470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 
00:27:14.190 [2024-11-20 09:59:37.319662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.319693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.319874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.319911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.320105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.320138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.320259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.320290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.320486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.320518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 
00:27:14.190 [2024-11-20 09:59:37.320706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.320738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.320935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.320978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.190 [2024-11-20 09:59:37.321178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.190 [2024-11-20 09:59:37.321210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.190 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.321381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.321413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.321520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.321551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 
00:27:14.191 [2024-11-20 09:59:37.321720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.321751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.321875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.321907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.322106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.322139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.322338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.322369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.322509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.322540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 
00:27:14.191 [2024-11-20 09:59:37.322727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.322758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.322945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.322988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.323093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.323126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.323235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.323267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.323386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.323416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 
00:27:14.191 [2024-11-20 09:59:37.323627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.323658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.323840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.323871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.324007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.324042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.324178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.324210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.324391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.324422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 
00:27:14.191 [2024-11-20 09:59:37.324640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.324672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.324861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.324893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.325098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.325130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.325386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.325454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.325603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.325639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 
00:27:14.191 [2024-11-20 09:59:37.325829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.325861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.326104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.326138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.326321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.326353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.326525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.326557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.326730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.326762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 
00:27:14.191 [2024-11-20 09:59:37.326941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.326982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.327246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.327276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.327449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.327481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.327607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.327638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.327906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.327937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 
00:27:14.191 [2024-11-20 09:59:37.328063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.328095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.328301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.328333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.328464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.328495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.328668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.328700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 00:27:14.191 [2024-11-20 09:59:37.328935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.191 [2024-11-20 09:59:37.328979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.191 qpair failed and we were unable to recover it. 
00:27:14.191 [2024-11-20 09:59:37.329196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.191 [2024-11-20 09:59:37.329228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.329408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.329440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.329637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.329668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.329903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.329934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.330079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.330112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.330297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.330328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.330515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.330547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.330762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.330795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.330910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.330942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.331084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.331117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.331287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.331324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.331451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.331481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.331661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.331691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.331870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.331901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.332094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.332127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.332319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.332351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.332482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.332513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.332641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.332672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.332932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.332974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.333237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.333270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.333508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.333539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.333719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.333749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.333873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.333904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.334091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.334123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.334302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.334334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.334438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.334469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.334593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.334624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.334808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.334840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.334963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.334996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.335109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.335141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.335250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.335281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.335451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.335481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.335739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.335769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.335941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.335984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.336271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.336302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.336539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.336572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.336704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.336735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.336930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.336979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.192 [2024-11-20 09:59:37.337160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.192 [2024-11-20 09:59:37.337193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.192 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.337373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.337405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.337520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.337550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.337653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.337684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.337803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.337834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.338002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.338035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.338140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.338172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.338281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.338313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.338451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.338482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.338655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.338687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.338874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.338906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.339020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.339052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.339236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.339267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.339442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.339515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.339650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.339686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.339937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.339989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.340179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.340211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.340337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.340368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.340557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.340588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.340775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.340806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.340974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.341008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.341243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.341274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.341450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.341481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.341628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.341659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.341837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.341868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.341974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.342023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.342218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.342258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.342493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.342524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.342724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.342755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.342936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.342994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.343198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.343229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.343401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.343433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.343562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.343593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.343781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.343813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.343945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.344004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.344126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.344158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.344271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.344303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.344412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.344443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.193 [2024-11-20 09:59:37.344623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.193 [2024-11-20 09:59:37.344654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.193 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.344892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.344924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.345069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.345101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.345342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.345374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.345645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.345678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.345906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.345938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.346159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.346191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.346451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.346483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.346601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.346633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.346759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.346792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.346969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.347003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.347176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.347208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.347379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.347410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.347594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.347626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.347862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.347895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.348025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.348063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.348300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.348333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.348600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.348631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.348856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.194 [2024-11-20 09:59:37.348887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.194 qpair failed and we were unable to recover it.
00:27:14.194 [2024-11-20 09:59:37.349009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.349041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.349248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.349281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.349559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.349591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.349729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.349760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.349971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.350004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 
00:27:14.194 [2024-11-20 09:59:37.350138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.350169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.350356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.350387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.350562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.350594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.350796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.350828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.351066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.351099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 
00:27:14.194 [2024-11-20 09:59:37.351322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.351357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.351559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.351590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.351844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.351875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.194 qpair failed and we were unable to recover it. 00:27:14.194 [2024-11-20 09:59:37.352041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.194 [2024-11-20 09:59:37.352073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.352201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.352232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 
00:27:14.195 [2024-11-20 09:59:37.352419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.352450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.352571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.352601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.352781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.352812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.353002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.353034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.353140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.353171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 
00:27:14.195 [2024-11-20 09:59:37.353285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.353316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.353495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.353525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.353713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.353744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.353982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.354022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.354138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.354168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 
00:27:14.195 [2024-11-20 09:59:37.354408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.354439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.354568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.354599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.354796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.354827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.354958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.354991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.355161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.355192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 
00:27:14.195 [2024-11-20 09:59:37.355375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.355405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.355606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.355638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.355839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.355869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.356006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.356038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.356165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.356195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 
00:27:14.195 [2024-11-20 09:59:37.356450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.356481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.356653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.356684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.356866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.356898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.357094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.357128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.357299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.357330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 
00:27:14.195 [2024-11-20 09:59:37.357462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.357492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.357719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.357751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.357871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.357902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.358195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.358227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.358488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.358518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 
00:27:14.195 [2024-11-20 09:59:37.358700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.358731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.358898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.358929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.359141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.359174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.359308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.359339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.359524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.359554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 
00:27:14.195 [2024-11-20 09:59:37.359742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.195 [2024-11-20 09:59:37.359774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.195 qpair failed and we were unable to recover it. 00:27:14.195 [2024-11-20 09:59:37.359909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.359941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.360127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.360159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.360394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.360425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.360548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.360580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 
00:27:14.196 [2024-11-20 09:59:37.360792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.360823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.361007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.361040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.361174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.361205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.361374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.361404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.361517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.361548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 
00:27:14.196 [2024-11-20 09:59:37.361787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.361818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.361940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.361981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.362161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.362192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.362363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.362399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.362648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.362679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 
00:27:14.196 [2024-11-20 09:59:37.362921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.362963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.363226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.363257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.363395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.363426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.363602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.363633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.363827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.363859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 
00:27:14.196 [2024-11-20 09:59:37.364041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.364073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.364252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.364283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.364418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.364449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.364560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.364592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.364781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.364811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 
00:27:14.196 [2024-11-20 09:59:37.365022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.365055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.365239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.365270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.365398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.365430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.365637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.365668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.365893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.365923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 
00:27:14.196 [2024-11-20 09:59:37.366147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.366179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.366386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.366416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.366629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.366660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.366911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.366943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.367076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.367107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 
00:27:14.196 [2024-11-20 09:59:37.367362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.367393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.367510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.367541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.367777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.367808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.196 [2024-11-20 09:59:37.367982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.196 [2024-11-20 09:59:37.368014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.196 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.368185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.368216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 
00:27:14.197 [2024-11-20 09:59:37.368345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.368377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.368504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.368536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.368652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.368682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.368886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.368918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.369157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.369227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 
00:27:14.197 [2024-11-20 09:59:37.369462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.369499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.369752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.369784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.369969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.370002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.370118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.370150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.370275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.370307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 
00:27:14.197 [2024-11-20 09:59:37.370440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.370471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.370729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.370762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.370946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.370992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.371109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.371140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.371425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.371456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 
00:27:14.197 [2024-11-20 09:59:37.371627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.371659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.371855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.371885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.372123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.372155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.372286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.372317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.372488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.372521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 
00:27:14.197 [2024-11-20 09:59:37.372651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.372682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.372872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.372903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.373103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.373135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.373377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.373408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.373530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.373561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 
00:27:14.197 [2024-11-20 09:59:37.373727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.373759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.373868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.373899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.374094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.374131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.374333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.374364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.374555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.374585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 
00:27:14.197 [2024-11-20 09:59:37.374779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.374809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.375046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.375079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.375278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.375309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.375480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.375513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.375671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.375703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 
00:27:14.197 [2024-11-20 09:59:37.375810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.375842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.197 [2024-11-20 09:59:37.376027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.197 [2024-11-20 09:59:37.376060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.197 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.376188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.376219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.376416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.376447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.376680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.376711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 
00:27:14.198 [2024-11-20 09:59:37.376921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.376963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.377163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.377195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.377431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.377462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.377640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.377672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.377856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.377887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 
00:27:14.198 [2024-11-20 09:59:37.378100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.378133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.378327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.378359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.378545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.378575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.378696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.378727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.378860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.378892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 
00:27:14.198 [2024-11-20 09:59:37.379158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.379190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.379427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.379459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.379634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.379665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.379905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.379936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.380049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.380080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 
00:27:14.198 [2024-11-20 09:59:37.380209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.380241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.380430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.380460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.380695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.380725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.380896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.380927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.381071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.381103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 
00:27:14.198 [2024-11-20 09:59:37.381321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.381352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.381550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.381583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.381691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.381724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.381861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.381893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.382120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.382152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 
00:27:14.198 [2024-11-20 09:59:37.382272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.382303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.382561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.382592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.382755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.382787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.382919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.382961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.383173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.383204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 
00:27:14.198 [2024-11-20 09:59:37.383408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.383440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.383548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.383580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.383753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.383784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.383969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.384003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 00:27:14.198 [2024-11-20 09:59:37.384117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.198 [2024-11-20 09:59:37.384149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.198 qpair failed and we were unable to recover it. 
00:27:14.199 [2024-11-20 09:59:37.384278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.384310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.384415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.384447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.384661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.384692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.384928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.384969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.385139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.385170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 
00:27:14.199 [2024-11-20 09:59:37.385342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.385372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.385501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.385531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.385717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.385750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.385861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.385890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.386007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.386039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 
00:27:14.199 [2024-11-20 09:59:37.386208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.386238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.386474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.386505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.386754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.386785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.386967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.386999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.387127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.387159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 
00:27:14.199 [2024-11-20 09:59:37.387398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.387428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.387543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.387574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.387684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.387713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.387830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.387859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.388071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.388104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 
00:27:14.199 [2024-11-20 09:59:37.388234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.388275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.388453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.388483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.388718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.388747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.388944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.388984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.389193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.389225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 
00:27:14.199 [2024-11-20 09:59:37.389400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.389431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.389690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.389720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.389983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.390022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.390212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.390245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.390510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.390540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 
00:27:14.199 [2024-11-20 09:59:37.390775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.390806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.391056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.391087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.391291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.391321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.199 [2024-11-20 09:59:37.391518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.199 [2024-11-20 09:59:37.391548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.199 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.391775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.391806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 
00:27:14.200 [2024-11-20 09:59:37.392016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.392049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.392332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.392363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.392489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.392519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.392641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.392670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.392926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.392968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 
00:27:14.200 [2024-11-20 09:59:37.393083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.393115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.393282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.393310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.393481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.393510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.393698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.393729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.393909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.393940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 
00:27:14.200 [2024-11-20 09:59:37.394067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.394097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.394276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.394306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.394422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.394453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.394655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.394686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.394868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.394901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 
00:27:14.200 [2024-11-20 09:59:37.395037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.395070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.395242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.395272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.395543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.395575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.395761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.395791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.395971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.396004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 
00:27:14.200 [2024-11-20 09:59:37.396129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.396161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.396342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.396372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.396584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.396616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.396789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.396821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.396938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.396977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 
00:27:14.200 [2024-11-20 09:59:37.397093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.397125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.397391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.397464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.397608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.397644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.397823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.397855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.398025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.398060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 
00:27:14.200 [2024-11-20 09:59:37.398328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.398360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.398529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.398560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.398746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.398778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.398903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.398935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.399136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.399168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 
00:27:14.200 [2024-11-20 09:59:37.399289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.399320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.200 [2024-11-20 09:59:37.399512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.200 [2024-11-20 09:59:37.399544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.200 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.399663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.399694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.399805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.399837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.400041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.400074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 
00:27:14.201 [2024-11-20 09:59:37.400278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.400310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.400562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.400593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.400693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.400725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.400912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.400944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.401072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.401103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 
00:27:14.201 [2024-11-20 09:59:37.401290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.401322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.401504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.401535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.401770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.401802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.401917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.401959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.402199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.402230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 
00:27:14.201 [2024-11-20 09:59:37.402352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.402382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.402620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.402650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.402852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.402884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.403083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.403116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.403308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.403339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 
00:27:14.201 [2024-11-20 09:59:37.403441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.403472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.403654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.403685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.403876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.403907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.404023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.404056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.404170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.404202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 
00:27:14.201 [2024-11-20 09:59:37.404373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.404404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.404580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.404611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.404810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.404843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.405025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.405058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.405232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.405264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 
00:27:14.201 [2024-11-20 09:59:37.405520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.405552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.405671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.405713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.405820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.405852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.406021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.406055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.406267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.406298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 
00:27:14.201 [2024-11-20 09:59:37.406557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.406587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.406754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.406786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.406977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.407009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.407182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.407213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.201 qpair failed and we were unable to recover it. 00:27:14.201 [2024-11-20 09:59:37.407339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.201 [2024-11-20 09:59:37.407370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 
00:27:14.202 [2024-11-20 09:59:37.407553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.407586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.407827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.407857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.408095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.408128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.408391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.408423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.408662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.408693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 
00:27:14.202 [2024-11-20 09:59:37.408877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.408910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.409036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.409069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.409260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.409291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.409498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.409528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.409666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.409697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 
00:27:14.202 [2024-11-20 09:59:37.409868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.409900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.410043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.410077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.410199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.410231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.410507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.410537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.410659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.410691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 
00:27:14.202 [2024-11-20 09:59:37.410867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.410898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.411104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.411137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.411315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.411347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.411463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.411493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.411751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.411784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 
00:27:14.202 [2024-11-20 09:59:37.411969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.412003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.412115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.412147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.412342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.412373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.412486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.412519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.412702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.412734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 
00:27:14.202 [2024-11-20 09:59:37.412991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.413023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.413150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.413181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.413440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.413471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.413660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.413693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.413813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.413844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 
00:27:14.202 [2024-11-20 09:59:37.414030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.414062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.414275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.414312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.414491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.414522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.414703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.414735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.414869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.414900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 
00:27:14.202 [2024-11-20 09:59:37.415015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.415048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.415148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.202 [2024-11-20 09:59:37.415179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.202 qpair failed and we were unable to recover it. 00:27:14.202 [2024-11-20 09:59:37.415365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.415397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.415582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.415614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.415793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.415824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 
00:27:14.203 [2024-11-20 09:59:37.415996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.416028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.416216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.416248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.416376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.416407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.416609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.416642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.416817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.416848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 
00:27:14.203 [2024-11-20 09:59:37.417050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.417084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.417234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.417266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.417378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.417408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.417591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.417622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.417787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.417819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 
00:27:14.203 [2024-11-20 09:59:37.418059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.418091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.418223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.418254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.418447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.418478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.418792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.418824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.418941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.418981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 
00:27:14.203 [2024-11-20 09:59:37.419170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.419201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.419446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.419477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.419805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.419835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.420032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.420066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.420216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.420246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 
00:27:14.203 [2024-11-20 09:59:37.420360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.420391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.420594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.420625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.420748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.420779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.420882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.420913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.421131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.421165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 
00:27:14.203 [2024-11-20 09:59:37.421283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.421315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.421501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.421533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.421730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.421762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.422004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.422042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 00:27:14.203 [2024-11-20 09:59:37.422225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.203 [2024-11-20 09:59:37.422256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.203 qpair failed and we were unable to recover it. 
00:27:14.203 [2024-11-20 09:59:37.422447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.422479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.422589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.422625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.422812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.422843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.422967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.422999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.423172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.423203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 
00:27:14.204 [2024-11-20 09:59:37.423327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.423359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.423664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.423695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.423881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.423912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.424188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.424222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.424352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.424383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 
00:27:14.204 [2024-11-20 09:59:37.424582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.424613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.424795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.424826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.425023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.425055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.425171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.425202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.425334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.425366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 
00:27:14.204 [2024-11-20 09:59:37.425558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.425589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.425829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.425861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.426145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.426177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.426387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.426418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.426598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.426629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 
00:27:14.204 [2024-11-20 09:59:37.426877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.426910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.427031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.427064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.427245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.427277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.427451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.427483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 00:27:14.204 [2024-11-20 09:59:37.427653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.427684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 
00:27:14.204 [2024-11-20 09:59:37.427804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.204 [2024-11-20 09:59:37.427836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.204 qpair failed and we were unable to recover it. 
[... the identical error pair — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats over 100 more times between 09:59:37.428033 and 09:59:37.451774 (log timestamps 00:27:14.204–00:27:14.207); every connection attempt fails and the qpair is never recovered ...]
00:27:14.207 [2024-11-20 09:59:37.451958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.451992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.452256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.452286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.452410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.452441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.452615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.452646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.452767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.452797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 
00:27:14.207 [2024-11-20 09:59:37.452970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.453002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.453189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.453221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.453467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.453499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.453632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.453664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.453767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.453797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 
00:27:14.207 [2024-11-20 09:59:37.453976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.454010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.454140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.454171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.454279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.454310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.454432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.454463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.454595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.454627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 
00:27:14.207 [2024-11-20 09:59:37.454731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.454762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.454940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.456023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.456487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.456564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.456800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.456836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.207 [2024-11-20 09:59:37.457102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.457138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 
00:27:14.207 [2024-11-20 09:59:37.457250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.207 [2024-11-20 09:59:37.457282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.207 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.457542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.457582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.457765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.457798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.458056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.458091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.458338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.458369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 
00:27:14.208 [2024-11-20 09:59:37.458584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.458614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.458789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.458820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.459033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.459067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.459241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.459272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.459398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.459430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 
00:27:14.208 [2024-11-20 09:59:37.459680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.459710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.460006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.460040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.460245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.460277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.460405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.460436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.460553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.460583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 
00:27:14.208 [2024-11-20 09:59:37.460722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.460763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.461031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.461063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.461239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.461271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.461496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.461527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.461719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.461751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 
00:27:14.208 [2024-11-20 09:59:37.461882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.461913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.462048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.462080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.462197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.462228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.462345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.462377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.462500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.462531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 
00:27:14.208 [2024-11-20 09:59:37.462768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.462798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.462918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.462959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.463073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.463104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.463225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.463263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.463546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.463578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 
00:27:14.208 [2024-11-20 09:59:37.463700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.463731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.463954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.463986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.464219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.464249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.464471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.464503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.464628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.464658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 
00:27:14.208 [2024-11-20 09:59:37.464945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.464988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.465127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.465159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.465272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.465303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.465488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.208 [2024-11-20 09:59:37.465518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.208 qpair failed and we were unable to recover it. 00:27:14.208 [2024-11-20 09:59:37.465696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.209 [2024-11-20 09:59:37.465728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.209 qpair failed and we were unable to recover it. 
00:27:14.209 [2024-11-20 09:59:37.465917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.209 [2024-11-20 09:59:37.465959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.209 qpair failed and we were unable to recover it. 00:27:14.209 [2024-11-20 09:59:37.466078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.209 [2024-11-20 09:59:37.466109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.209 qpair failed and we were unable to recover it. 00:27:14.209 [2024-11-20 09:59:37.466351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.209 [2024-11-20 09:59:37.466384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.466492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.466525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.466728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.466760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 
00:27:14.497 [2024-11-20 09:59:37.466942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.466986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.467161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.467192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.467308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.467338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.467592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.467623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.467740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.467771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 
00:27:14.497 [2024-11-20 09:59:37.467990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.468023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.468165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.468196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.468387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.468418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.468534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.468566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 00:27:14.497 [2024-11-20 09:59:37.468756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.497 [2024-11-20 09:59:37.468788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.497 qpair failed and we were unable to recover it. 
00:27:14.497 [2024-11-20 09:59:37.468968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.497 [2024-11-20 09:59:37.469002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.497 qpair failed and we were unable to recover it.
00:27:14.500 [2024-11-20 09:59:37.494070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.500 [2024-11-20 09:59:37.494101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.500 qpair failed and we were unable to recover it. 00:27:14.500 [2024-11-20 09:59:37.494281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.500 [2024-11-20 09:59:37.494312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.500 qpair failed and we were unable to recover it. 00:27:14.500 [2024-11-20 09:59:37.494518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.500 [2024-11-20 09:59:37.494550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.500 qpair failed and we were unable to recover it. 00:27:14.500 [2024-11-20 09:59:37.494737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.500 [2024-11-20 09:59:37.494767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.500 qpair failed and we were unable to recover it. 00:27:14.500 [2024-11-20 09:59:37.494962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.500 [2024-11-20 09:59:37.494994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.500 qpair failed and we were unable to recover it. 
00:27:14.500 [2024-11-20 09:59:37.495206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.500 [2024-11-20 09:59:37.495238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.500 qpair failed and we were unable to recover it. 00:27:14.500 [2024-11-20 09:59:37.495492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.500 [2024-11-20 09:59:37.495523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.500 qpair failed and we were unable to recover it. 00:27:14.500 [2024-11-20 09:59:37.495713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.500 [2024-11-20 09:59:37.495743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.500 qpair failed and we were unable to recover it. 00:27:14.500 [2024-11-20 09:59:37.496009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.500 [2024-11-20 09:59:37.496041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.500 qpair failed and we were unable to recover it. 00:27:14.500 [2024-11-20 09:59:37.496226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.496263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 
00:27:14.501 [2024-11-20 09:59:37.496382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.496413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.496516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.496547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.496732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.496764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.496894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.496924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.497121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.497153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 
00:27:14.501 [2024-11-20 09:59:37.497275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.497305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.497569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.497600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.497739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.497770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.498052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.498090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.498194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.498226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 
00:27:14.501 [2024-11-20 09:59:37.498327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.498359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.498528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.498558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.498690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.498722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.498832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.498864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.499059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.499093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 
00:27:14.501 [2024-11-20 09:59:37.499280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.499311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.499502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.499534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.499665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.499697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.499868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.499898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.500145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.500176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 
00:27:14.501 [2024-11-20 09:59:37.500349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.500380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.500602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.500633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.500846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.500877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.501000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.501032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.501201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.501230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 
00:27:14.501 [2024-11-20 09:59:37.501419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.501449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.501637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.501669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.501858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.501889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.502021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.502052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.502235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.502265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 
00:27:14.501 [2024-11-20 09:59:37.502414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.502444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.501 qpair failed and we were unable to recover it. 00:27:14.501 [2024-11-20 09:59:37.502737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.501 [2024-11-20 09:59:37.502769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.503008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.503041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.503150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.503182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.503379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.503410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 
00:27:14.502 [2024-11-20 09:59:37.503544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.503576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.503772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.503802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.503923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.503962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.504082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.504113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.504298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.504329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 
00:27:14.502 [2024-11-20 09:59:37.504447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.504477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.504589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.504620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.504814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.504846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.504959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.504991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.505273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.505304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 
00:27:14.502 [2024-11-20 09:59:37.505427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.505458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.505587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.505618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.505760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.505790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.505904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.505935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.506129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.506161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 
00:27:14.502 [2024-11-20 09:59:37.506368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.506399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.506519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.506550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.506722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.506752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.506868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.506898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.507165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.507197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 
00:27:14.502 [2024-11-20 09:59:37.507381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.507412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.507585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.507616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.507825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.507857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.507976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.508008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.508244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.508274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 
00:27:14.502 [2024-11-20 09:59:37.508454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.508484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.508715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.508745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.508991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.509024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.509224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.509256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.509515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.509545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 
00:27:14.502 [2024-11-20 09:59:37.509712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.509742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.502 [2024-11-20 09:59:37.509920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.502 [2024-11-20 09:59:37.509958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.502 qpair failed and we were unable to recover it. 00:27:14.503 [2024-11-20 09:59:37.510197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.503 [2024-11-20 09:59:37.510228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.503 qpair failed and we were unable to recover it. 00:27:14.503 [2024-11-20 09:59:37.510333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.503 [2024-11-20 09:59:37.510363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.503 qpair failed and we were unable to recover it. 00:27:14.503 [2024-11-20 09:59:37.510546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.503 [2024-11-20 09:59:37.510577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.503 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." records repeated through 2024-11-20 09:59:37.533018 omitted ...]
00:27:14.506 [2024-11-20 09:59:37.533257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.533289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.533410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.533442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.533572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.533606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.533776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.533808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.533911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.533943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 
00:27:14.506 [2024-11-20 09:59:37.534135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.534166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.534317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.534350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.534517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.534548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.534672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.534705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.534919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.534960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 
00:27:14.506 [2024-11-20 09:59:37.535063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.535096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.535266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.535298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.535540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.535573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.535837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.535869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.536048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.536082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 
00:27:14.506 [2024-11-20 09:59:37.536230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.536272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.536455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.536488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.536672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.536704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.536883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.536915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.537093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.537127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 
00:27:14.506 [2024-11-20 09:59:37.537306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.537339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.537521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.537552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.537670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.537702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.537886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.537918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.506 [2024-11-20 09:59:37.538110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.538142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 
00:27:14.506 [2024-11-20 09:59:37.538265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.506 [2024-11-20 09:59:37.538297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.506 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.538417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.538449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.538549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.538581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.538756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.538788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.538918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.538957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 
00:27:14.507 [2024-11-20 09:59:37.539165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.539197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.539306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.539339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.539610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.539642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.539835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.539867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.540070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.540103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 
00:27:14.507 [2024-11-20 09:59:37.540280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.540313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.540493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.540526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.540660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.540692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.540798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.540832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.541122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.541156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 
00:27:14.507 [2024-11-20 09:59:37.541332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.541364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.541487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.541520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.541712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.541745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.541923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.541969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.542158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.542191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 
00:27:14.507 [2024-11-20 09:59:37.542318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.542350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.542608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.542639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.542777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.542809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.542982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.543015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.543147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.543180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 
00:27:14.507 [2024-11-20 09:59:37.543277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.543310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.543424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.543457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.543657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.543688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.543928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.543972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.544144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.544177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 
00:27:14.507 [2024-11-20 09:59:37.544355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.544393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.544574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.544605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.544805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.544837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.545089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.545122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.545300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.545332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 
00:27:14.507 [2024-11-20 09:59:37.545507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.545539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.545712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.507 [2024-11-20 09:59:37.545745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.507 qpair failed and we were unable to recover it. 00:27:14.507 [2024-11-20 09:59:37.545969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.546004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.546215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.546248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.546381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.546412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 
00:27:14.508 [2024-11-20 09:59:37.546668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.546701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.546880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.546913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.547167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.547201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.547370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.547402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.547610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.547641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 
00:27:14.508 [2024-11-20 09:59:37.547821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.547853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.548034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.548067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.548238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.548270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.548380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.548412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.548597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.548629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 
00:27:14.508 [2024-11-20 09:59:37.548743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.548775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.548991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.549026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.549272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.549306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.549515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.549547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 00:27:14.508 [2024-11-20 09:59:37.549651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.508 [2024-11-20 09:59:37.549684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.508 qpair failed and we were unable to recover it. 
00:27:14.510 [2024-11-20 09:59:37.566880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.510 [2024-11-20 09:59:37.566913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.510 qpair failed and we were unable to recover it.
00:27:14.510 [2024-11-20 09:59:37.567097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.510 [2024-11-20 09:59:37.567131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.510 qpair failed and we were unable to recover it.
00:27:14.510 [2024-11-20 09:59:37.567310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.510 [2024-11-20 09:59:37.567385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.510 qpair failed and we were unable to recover it.
00:27:14.510 [2024-11-20 09:59:37.567643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.510 [2024-11-20 09:59:37.567678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.510 qpair failed and we were unable to recover it.
00:27:14.510 [2024-11-20 09:59:37.567810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.510 [2024-11-20 09:59:37.567844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.510 qpair failed and we were unable to recover it.
00:27:14.511 [2024-11-20 09:59:37.573211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.573243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.573479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.573512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.573713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.573744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.573931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.573974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.574162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.574195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 
00:27:14.511 [2024-11-20 09:59:37.574418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.574450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.574636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.574668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.574959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.574993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.575166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.575198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.575379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.575411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 
00:27:14.511 [2024-11-20 09:59:37.575648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.575679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.575848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.575880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.576068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.576101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.576369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.576402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 00:27:14.511 [2024-11-20 09:59:37.576595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.511 [2024-11-20 09:59:37.576627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.511 qpair failed and we were unable to recover it. 
00:27:14.511 [2024-11-20 09:59:37.576833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.576865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.576986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.577020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.577137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.577169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.577279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.577311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.577500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.577532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 
00:27:14.512 [2024-11-20 09:59:37.577710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.577741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.577930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.577977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.578097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.578128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.578298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.578328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.578514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.578546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 
00:27:14.512 [2024-11-20 09:59:37.578765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.578797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.578905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.578943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.579063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.579094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.579214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.579247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.579420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.579452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 
00:27:14.512 [2024-11-20 09:59:37.579619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.579650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.579817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.579848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.580031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.580065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.580301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.580333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.580504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.580536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 
00:27:14.512 [2024-11-20 09:59:37.580722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.580754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.580920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.580959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.581218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.581249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.581435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.581466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.581703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.581734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 
00:27:14.512 [2024-11-20 09:59:37.581924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.581963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.582086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.582117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.582240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.582271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.582447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.582479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.582672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.582704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 
00:27:14.512 [2024-11-20 09:59:37.582889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.582921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.583119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.583151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.583365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.583398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.583567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.583600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.583780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.583811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 
00:27:14.512 [2024-11-20 09:59:37.583989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.584021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.584204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.512 [2024-11-20 09:59:37.584235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.512 qpair failed and we were unable to recover it. 00:27:14.512 [2024-11-20 09:59:37.584501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.584532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.584817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.584849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.585041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.585075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 
00:27:14.513 [2024-11-20 09:59:37.585240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.585273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.585390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.585422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.585526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.585557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.585791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.585823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.586021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.586056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 
00:27:14.513 [2024-11-20 09:59:37.586171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.586203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.586378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.586411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.586630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.586661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.586921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.586962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.587149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.587182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 
00:27:14.513 [2024-11-20 09:59:37.587359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.587392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.587522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.587555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.587732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.587769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.587970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.588003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.588130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.588161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 
00:27:14.513 [2024-11-20 09:59:37.588332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.588365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.588492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.588524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.588655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.588685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.588860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.588893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.589154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.589187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 
00:27:14.513 [2024-11-20 09:59:37.589446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.589480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.589669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.589700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.589898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.589930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.590118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.590151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.590391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.590421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 
00:27:14.513 [2024-11-20 09:59:37.590534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.590566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.590688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.590720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.590902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.590933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.591157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.591190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.591392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.591423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 
00:27:14.513 [2024-11-20 09:59:37.591606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.591637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.591884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.591916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.592135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.592167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.592401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.592433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.513 qpair failed and we were unable to recover it. 00:27:14.513 [2024-11-20 09:59:37.592629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.513 [2024-11-20 09:59:37.592660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 
00:27:14.514 [2024-11-20 09:59:37.592839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.592871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.593059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.593094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.593338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.593370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.593493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.593526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.593724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.593762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 
00:27:14.514 [2024-11-20 09:59:37.593988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.594022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.594197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.594228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.594399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.594431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.594608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.594640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.594822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.594854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 
00:27:14.514 [2024-11-20 09:59:37.595030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.595063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.595249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.595280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.595455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.595486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.595595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.595626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.595862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.595895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 
00:27:14.514 [2024-11-20 09:59:37.596169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.596204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.596390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.596423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.596606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.596638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.596873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.596963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.597163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.597201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 
00:27:14.514 [2024-11-20 09:59:37.597409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.597443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.597560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.597593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.597783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.597817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.597987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.598022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.598212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.598245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 
00:27:14.514 [2024-11-20 09:59:37.598436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.598468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.598590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.598624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.598740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.598772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.598979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.599013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.599197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.599230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 
00:27:14.514 [2024-11-20 09:59:37.599415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.599447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.599629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.599672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.599842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.599875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.600079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.600113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.600351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.600383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 
00:27:14.514 [2024-11-20 09:59:37.600508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.514 [2024-11-20 09:59:37.600540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.514 qpair failed and we were unable to recover it. 00:27:14.514 [2024-11-20 09:59:37.600775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.600807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.600988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.601021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.601304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.601337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.601614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.601647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 
00:27:14.515 [2024-11-20 09:59:37.601835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.601868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.602053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.602088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.602276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.602308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.602536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.602569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.602680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.602711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 
00:27:14.515 [2024-11-20 09:59:37.602962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.602997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.603122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.603155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.603323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.603355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.603462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.603493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.603679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.603712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 
00:27:14.515 [2024-11-20 09:59:37.603898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.603931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.604046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.604079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.604282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.604314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.604524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.604557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.604681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.604714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 
00:27:14.515 [2024-11-20 09:59:37.604962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.604996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.605099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.605131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.605266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.605299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.605577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.605653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.605890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.605926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 
00:27:14.515 [2024-11-20 09:59:37.606133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.606172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.606422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.606454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.606647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.606680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.606850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.606881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.607146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.607180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 
00:27:14.515 [2024-11-20 09:59:37.607358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.607391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.607526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.607557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.607736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.607769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.607974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.608006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.608268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.608301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 
00:27:14.515 [2024-11-20 09:59:37.608480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.608511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.608640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.608671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.608793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.608825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.609013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.515 [2024-11-20 09:59:37.609046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.515 qpair failed and we were unable to recover it. 00:27:14.515 [2024-11-20 09:59:37.609223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.609254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 
00:27:14.516 [2024-11-20 09:59:37.609505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.609536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 00:27:14.516 [2024-11-20 09:59:37.609736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.609768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 00:27:14.516 [2024-11-20 09:59:37.609873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.609905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 00:27:14.516 [2024-11-20 09:59:37.610207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.610240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 00:27:14.516 [2024-11-20 09:59:37.610360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.610392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 
00:27:14.516 [2024-11-20 09:59:37.610646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.610678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 00:27:14.516 [2024-11-20 09:59:37.610888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.610921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 00:27:14.516 [2024-11-20 09:59:37.611154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.611187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 00:27:14.516 [2024-11-20 09:59:37.611369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.611401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 00:27:14.516 [2024-11-20 09:59:37.611652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.516 [2024-11-20 09:59:37.611684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.516 qpair failed and we were unable to recover it. 
00:27:14.517 [2024-11-20 09:59:37.620118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.517 [2024-11-20 09:59:37.620152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.517 qpair failed and we were unable to recover it.
00:27:14.517 [2024-11-20 09:59:37.620385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.517 [2024-11-20 09:59:37.620458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.517 qpair failed and we were unable to recover it.
00:27:14.517 [2024-11-20 09:59:37.620776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.517 [2024-11-20 09:59:37.620812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.517 qpair failed and we were unable to recover it.
00:27:14.517 [2024-11-20 09:59:37.621074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.517 [2024-11-20 09:59:37.621109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.517 qpair failed and we were unable to recover it.
00:27:14.517 [2024-11-20 09:59:37.621234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.517 [2024-11-20 09:59:37.621268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.517 qpair failed and we were unable to recover it.
00:27:14.519 [2024-11-20 09:59:37.635956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.635990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.636171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.636204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.636370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.636403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.636671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.636703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.636882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.636914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 
00:27:14.519 [2024-11-20 09:59:37.637059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.637092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.637269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.637301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.637485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.637517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.637688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.637719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.637981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.638015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 
00:27:14.519 [2024-11-20 09:59:37.638250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.638282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.638537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.638575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.638781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.638813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.638986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.639019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.639188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.639221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 
00:27:14.519 [2024-11-20 09:59:37.639425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.639458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.639589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.639621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.639790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.639822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.640061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.640094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.640338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.640372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 
00:27:14.519 [2024-11-20 09:59:37.640562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.640595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.640824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.640857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.641052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.641086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.519 qpair failed and we were unable to recover it. 00:27:14.519 [2024-11-20 09:59:37.641213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.519 [2024-11-20 09:59:37.641245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.641357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.641390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 
00:27:14.520 [2024-11-20 09:59:37.641512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.641544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.641754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.641786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.641919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.641983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.642177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.642209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.642330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.642363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 
00:27:14.520 [2024-11-20 09:59:37.642482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.642515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.642754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.642787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.642909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.642942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.643089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.643122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.643304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.643337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 
00:27:14.520 [2024-11-20 09:59:37.643538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.643571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.643742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.643775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.643945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.643987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.644179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.644211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.644379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.644412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 
00:27:14.520 [2024-11-20 09:59:37.644529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.644561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.644666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.644698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.644815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.644846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.645025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.645059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.645233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.645266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 
00:27:14.520 [2024-11-20 09:59:37.645395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.645426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.645689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.645722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.645845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.645878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.646002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.646039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.646295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.646328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 
00:27:14.520 [2024-11-20 09:59:37.646588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.646620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.646740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.646777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.647014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.647048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.647174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.647206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 00:27:14.520 [2024-11-20 09:59:37.647417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.647449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.520 qpair failed and we were unable to recover it. 
00:27:14.520 [2024-11-20 09:59:37.647657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.520 [2024-11-20 09:59:37.647690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.647865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.647897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.648019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.648054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.648275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.648305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.648486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.648519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 
00:27:14.521 [2024-11-20 09:59:37.648715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.648747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.648926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.648965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.649087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.649119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.649224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.649257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.649430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.649462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 
00:27:14.521 [2024-11-20 09:59:37.649590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.649623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.649806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.649838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.650028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.650061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.650300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.650333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.650594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.650627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 
00:27:14.521 [2024-11-20 09:59:37.650866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.650897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.651120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.651154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.651347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.651379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.651592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.651624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.651755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.651788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 
00:27:14.521 [2024-11-20 09:59:37.652025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.652059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.652182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.652215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.652393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.652426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.652647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.652679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 00:27:14.521 [2024-11-20 09:59:37.652781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.521 [2024-11-20 09:59:37.652813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.521 qpair failed and we were unable to recover it. 
00:27:14.524 [2024-11-20 09:59:37.676255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.676289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.676423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.676457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.676631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.676664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.676839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.676873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.677058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.677093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 
00:27:14.524 [2024-11-20 09:59:37.677218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.677251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.677363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.677396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.677515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.677549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.677793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.677826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.677997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.678031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 
00:27:14.524 [2024-11-20 09:59:37.678299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.678334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.678519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.678550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.678661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.678692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.524 [2024-11-20 09:59:37.678864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.524 [2024-11-20 09:59:37.678896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.524 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.679022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.679057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-11-20 09:59:37.679234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.679267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.679476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.679510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.679730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.679764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.679955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.679988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.680115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.680146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-11-20 09:59:37.680387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.680420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.680706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.680744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.680920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.680963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.681211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.681243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.681375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.681408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-11-20 09:59:37.681527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.681559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.681678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.681712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.681957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.681992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.682116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.682151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.682337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.682369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-11-20 09:59:37.682481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.682515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.682630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.682661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.682786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.682819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.682999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.683033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.683213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.683246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-11-20 09:59:37.683435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.683468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.683594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.683626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.683749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.683782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.684015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.684047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.684221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.684254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-11-20 09:59:37.684394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.684428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.684611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.684643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.684768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.684801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.684918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.684977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.685218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.685251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 
00:27:14.525 [2024-11-20 09:59:37.685387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.525 [2024-11-20 09:59:37.685421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.525 qpair failed and we were unable to recover it. 00:27:14.525 [2024-11-20 09:59:37.685595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.685626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.685804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.685838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.686029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.686064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.686251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.686285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-11-20 09:59:37.686464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.686496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.686630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.686662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.686897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.686929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.687120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.687154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.687463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.687495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-11-20 09:59:37.687681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.687714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.687832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.687864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.688048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.688082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.688201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.688232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.688355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.688389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-11-20 09:59:37.688605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.688638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.688810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.688850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.689112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.689147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.689252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.689284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.689510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.689542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-11-20 09:59:37.689647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.689679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.689792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.689825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.689999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.690033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.690241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.690273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.690452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.690483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-11-20 09:59:37.690607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.690640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.690777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.690809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.690921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.690959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.691081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.691114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.691317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.691350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-11-20 09:59:37.691466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.691499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.691694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.691727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.691902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.691934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.692071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.692104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.692358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.692391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 
00:27:14.526 [2024-11-20 09:59:37.692509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.692542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.692667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.692699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.692869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.692902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.526 [2024-11-20 09:59:37.693056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.526 [2024-11-20 09:59:37.693091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.526 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.693213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.693246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 
00:27:14.527 [2024-11-20 09:59:37.693374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.693407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.693668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.693700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.693805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.693837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.694024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.694059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.694179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.694211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 
00:27:14.527 [2024-11-20 09:59:37.694385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.694417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.694525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.694566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.694691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.694721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.694858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.694889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.695018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.695049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 
00:27:14.527 [2024-11-20 09:59:37.695236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.695269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.695439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.695471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.695711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.695743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.695986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.696018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.696264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.696296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 
00:27:14.527 [2024-11-20 09:59:37.696536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.696569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.696754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.696798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.696913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.696955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.697228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.697262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.697502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.697535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 
00:27:14.527 [2024-11-20 09:59:37.697645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.697679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.697853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.697886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.698081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.698115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.698246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.698279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.698417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.698450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 
00:27:14.527 [2024-11-20 09:59:37.698581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.698615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.698813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.698846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.699084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.699119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.699240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.699273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.699387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.699420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 
00:27:14.527 [2024-11-20 09:59:37.699604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.699636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.699752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.699784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.699965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.699998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.700117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.700150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 00:27:14.527 [2024-11-20 09:59:37.700336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.700368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.527 qpair failed and we were unable to recover it. 
00:27:14.527 [2024-11-20 09:59:37.700490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.527 [2024-11-20 09:59:37.700522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.700764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.700798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.700984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.701018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.701254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.701287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.701546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.701579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 
00:27:14.528 [2024-11-20 09:59:37.701704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.701737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.701918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.701960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.702090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.702123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.702429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.702501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.702629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.702666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 
00:27:14.528 [2024-11-20 09:59:37.702909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.702943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.703072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.703104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.703280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.703312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.703494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.703526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.703644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.703678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 
00:27:14.528 [2024-11-20 09:59:37.703861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.703893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.704032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.704065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.704244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.704276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.704400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.704432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.704672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.704705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 
00:27:14.528 [2024-11-20 09:59:37.704900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.704932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.705223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.705257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.705533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.705566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.705759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.705792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.705928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.705973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 
00:27:14.528 [2024-11-20 09:59:37.706097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.706128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.706305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.706338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.706514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.706545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.706792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.706824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.706936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.706980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 
00:27:14.528 [2024-11-20 09:59:37.707150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.707183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.707308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.707339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.707578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.707610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.707739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.707770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.708008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.708042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 
00:27:14.528 [2024-11-20 09:59:37.708234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.708271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.708507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.708540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.708671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.528 [2024-11-20 09:59:37.708702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.528 qpair failed and we were unable to recover it. 00:27:14.528 [2024-11-20 09:59:37.708886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.708917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 00:27:14.529 [2024-11-20 09:59:37.709043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.709076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 
00:27:14.529 [2024-11-20 09:59:37.709337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.709369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 00:27:14.529 [2024-11-20 09:59:37.709485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.709518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 00:27:14.529 [2024-11-20 09:59:37.709647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.709678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 00:27:14.529 [2024-11-20 09:59:37.709856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.709887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 00:27:14.529 [2024-11-20 09:59:37.710127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.710162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 
00:27:14.529 [2024-11-20 09:59:37.710295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.710327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 00:27:14.529 [2024-11-20 09:59:37.710512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.710544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 00:27:14.529 [2024-11-20 09:59:37.710746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.710779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 00:27:14.529 [2024-11-20 09:59:37.710904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.710935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 00:27:14.529 [2024-11-20 09:59:37.711187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.529 [2024-11-20 09:59:37.711218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.529 qpair failed and we were unable to recover it. 
00:27:14.529 [2024-11-20 09:59:37.711327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.711357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.711490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.711523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.711704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.711734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.711865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.711897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.712072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.712105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.712277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.712309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.712419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.712452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.712618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.712650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.712839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.712872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.713040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.713076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.713189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.713220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.713419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.713452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.713628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.713665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.713788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.713820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.714079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.714113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.714292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.714324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.714436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.714467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.714585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.714616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.714733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.714765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.714941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.714983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.715157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.715189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.715427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.715460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.715635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.715668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.715840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.715872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.716009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.716042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.529 qpair failed and we were unable to recover it.
00:27:14.529 [2024-11-20 09:59:37.716149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.529 [2024-11-20 09:59:37.716184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.716373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.716404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.716515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.716547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.716662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.716694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.716814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.716846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.717064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.717098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.717297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.717331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.717453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.717484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.717655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.717687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.717791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.717823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.718040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.718074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.718194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.718226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.718348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.718381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.718571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.718603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.718774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.718807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.718985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.719018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.719207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.719241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.719434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.719467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.719641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.719673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.719848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.719880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.720069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.720102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.720298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.720331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.720501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.720533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.720713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.720744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.720985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.721019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.721198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.721230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.721351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.721383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.721568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.721601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.721862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.721900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.722015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.722048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.722245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.722278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.722398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.722429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.722554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.722585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.530 qpair failed and we were unable to recover it.
00:27:14.530 [2024-11-20 09:59:37.722826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.530 [2024-11-20 09:59:37.722858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.723056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.723090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.723355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.723387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.723507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.723539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.723652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.723684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.723966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.724001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.724188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.724220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.724350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.724382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.724481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.724511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.724686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.724718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.724887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.724918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.725183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.725256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.725413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.725451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.725635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.725669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.725792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.725827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.725971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.726005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.726194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.726227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.726337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.726370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.726609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.726641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.726831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.726864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.727165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.727199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.727445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.727478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.727609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.727652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.727834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.727867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.728127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.728161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.728285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.728317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.728451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.728483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.728683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.728715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.728892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.728925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.729212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.729246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.729434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.729467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.729640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.729672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.729924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.729966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.730223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.730257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.730371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.730404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.730575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.730607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.531 [2024-11-20 09:59:37.730821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.531 [2024-11-20 09:59:37.730854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.531 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.731028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.731062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.731248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.731282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.731463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.731496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.731615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.731649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.731895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.731928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.732119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.732151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.732413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.732445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.732560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.732594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.732764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.732796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.732967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.733003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.733242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.733276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.733459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.733492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.733674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.733707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.733823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.733856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.733967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.734001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.734204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.734236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.734361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.734394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.734637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.734669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.734773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.734806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.735001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.532 [2024-11-20 09:59:37.735034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.532 qpair failed and we were unable to recover it.
00:27:14.532 [2024-11-20 09:59:37.735227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.735258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.735521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.735555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.735677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.735709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.735888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.735921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.736172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.736204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 
00:27:14.532 [2024-11-20 09:59:37.736471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.736509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.736615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.736662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.736798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.736829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.736942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.736995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.737237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.737269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 
00:27:14.532 [2024-11-20 09:59:37.737380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.737413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.737601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.737634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.737881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.737914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.738104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.738137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.738255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.738287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 
00:27:14.532 [2024-11-20 09:59:37.738398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.532 [2024-11-20 09:59:37.738432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.532 qpair failed and we were unable to recover it. 00:27:14.532 [2024-11-20 09:59:37.738638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.738672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.738874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.738908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.739017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.739051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.739260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.739295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-11-20 09:59:37.739504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.739536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.739663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.739697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.739814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.739847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.740033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.740067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.740256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.740290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-11-20 09:59:37.740458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.740491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.740627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.740659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.740840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.740872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.741152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.741194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.741307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.741340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-11-20 09:59:37.741484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.741518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.741685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.741728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.741855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.741889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.741995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.742028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.742140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.742172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-11-20 09:59:37.742291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.742324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.742563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.742596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.742835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.742867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.742978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.743017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.743152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.743185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-11-20 09:59:37.743386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.743419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.743594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.743627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.743820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.743852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.744047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.744081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.744193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.744226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-11-20 09:59:37.744499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.744536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.744710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.744748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.744930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.744977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.745169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.745201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.745382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.745413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 
00:27:14.533 [2024-11-20 09:59:37.745590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.745621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.745742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.745774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.745968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.746002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.533 qpair failed and we were unable to recover it. 00:27:14.533 [2024-11-20 09:59:37.746129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.533 [2024-11-20 09:59:37.746163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.746336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.746368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
00:27:14.534 [2024-11-20 09:59:37.746561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.746593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.746719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.746751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.746864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.746896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.747076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.747109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.747293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.747325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
00:27:14.534 [2024-11-20 09:59:37.747535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.747567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.747677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.747710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.747830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.747868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.747971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.748006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.748210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.748243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
00:27:14.534 [2024-11-20 09:59:37.748506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.748539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.748682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.748720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.748932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.748985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.749094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.749127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.749294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.749327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
00:27:14.534 [2024-11-20 09:59:37.749445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.749479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.749594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.749628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.749782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.749856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.750105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.750145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 00:27:14.534 [2024-11-20 09:59:37.750339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.750372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
00:27:14.534 [2024-11-20 09:59:37.750615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.534 [2024-11-20 09:59:37.750648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.534 qpair failed and we were unable to recover it. 
[... the identical three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 09:59:37.750754 through 09:59:37.763258 for tqpair=0x8dbba0, then from 09:59:37.763427 through 09:59:37.772802 for tqpair=0x7f7b9c000b90, at log timestamps 00:27:14.534 through 00:27:14.537 ...]
00:27:14.537 [2024-11-20 09:59:37.773010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-11-20 09:59:37.773044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-11-20 09:59:37.773208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-11-20 09:59:37.773241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-11-20 09:59:37.773422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-11-20 09:59:37.773455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-11-20 09:59:37.773624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-11-20 09:59:37.773663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.537 [2024-11-20 09:59:37.773837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-11-20 09:59:37.773871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 
00:27:14.537 [2024-11-20 09:59:37.774073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.537 [2024-11-20 09:59:37.774106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.537 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.774283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.774316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.774433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.774466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.774571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.774603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.774733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.774766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 
00:27:14.538 [2024-11-20 09:59:37.775007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.775042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.775219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.775252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.775365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.775397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.775517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.775549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.775788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.775822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 
00:27:14.538 [2024-11-20 09:59:37.775943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.775984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.776096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.776129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.776312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.776346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.776477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.776509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.776784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.776817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 
00:27:14.538 [2024-11-20 09:59:37.776931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.776972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.777218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.777251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.777384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.777415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.777526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.777558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.777659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.777692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 
00:27:14.538 [2024-11-20 09:59:37.777863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.777895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.778149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.778182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.778361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.778393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.778502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.778534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 00:27:14.538 [2024-11-20 09:59:37.778701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.538 [2024-11-20 09:59:37.778733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.538 qpair failed and we were unable to recover it. 
00:27:14.538 [2024-11-20 09:59:37.778967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.538 [2024-11-20 09:59:37.779040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.538 qpair failed and we were unable to recover it.
00:27:14.540 [2024-11-20 09:59:37.791945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.791988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.792253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.792286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.792404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.792436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.792604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.792636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.792840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.792873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 
00:27:14.540 [2024-11-20 09:59:37.792995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.793028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.793148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.793181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.793444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.793477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.793644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.793676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.793874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.793907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 
00:27:14.540 [2024-11-20 09:59:37.794031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.794065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.794195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.794228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.794339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.794371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.794495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.794527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.794738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.794771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 
00:27:14.540 [2024-11-20 09:59:37.794972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.795011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.795201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.795233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.795353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.795385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.795586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.795619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.795757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.795790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 
00:27:14.540 [2024-11-20 09:59:37.795999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.796032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.796204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.796236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.796361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.796393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.796585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.796618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 00:27:14.540 [2024-11-20 09:59:37.796844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.540 [2024-11-20 09:59:37.796876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.540 qpair failed and we were unable to recover it. 
00:27:14.540 [2024-11-20 09:59:37.797056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-11-20 09:59:37.797090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-11-20 09:59:37.797266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-11-20 09:59:37.797299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-11-20 09:59:37.797471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-11-20 09:59:37.797504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-11-20 09:59:37.797619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-11-20 09:59:37.797652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-11-20 09:59:37.797760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-11-20 09:59:37.797793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 
00:27:14.541 [2024-11-20 09:59:37.797978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.541 [2024-11-20 09:59:37.798011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.541 qpair failed and we were unable to recover it. 00:27:14.541 [2024-11-20 09:59:37.798274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.798308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.798565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.798598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.798732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.798765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.798943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.799006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 
00:27:14.830 [2024-11-20 09:59:37.799195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.799228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.799339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.799371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.799491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.799523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.799661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.799693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.799867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.799900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 
00:27:14.830 [2024-11-20 09:59:37.800023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.800056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.800241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.800274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.800414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.800448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.800626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.800659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.800920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.800962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 
00:27:14.830 [2024-11-20 09:59:37.801140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.801173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.801292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.801323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.830 [2024-11-20 09:59:37.801586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.830 [2024-11-20 09:59:37.801618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.830 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.801805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.801839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.801956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.801991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-11-20 09:59:37.802105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.802137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.802318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.802351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.802487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.802519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.802685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.802716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.802906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.802940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-11-20 09:59:37.803158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.803196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.803366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.803399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.803584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.803617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.803827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.803860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.804047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.804081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-11-20 09:59:37.804214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.804245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.804424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.804457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.804673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.804705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.804943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.804983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.805193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.805225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-11-20 09:59:37.805419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.805452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.805570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.805601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.805780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.805811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.806049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.806083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.806194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.806226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-11-20 09:59:37.806361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.806393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.806568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.806600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.806707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.806739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.806910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.806941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.807202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.807234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.831 [2024-11-20 09:59:37.807415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.807447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.807616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.807648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.807819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.807850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.808021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.808054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 00:27:14.831 [2024-11-20 09:59:37.808235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.831 [2024-11-20 09:59:37.808267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.831 qpair failed and we were unable to recover it. 
00:27:14.834 [2024-11-20 09:59:37.832056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-11-20 09:59:37.832090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-11-20 09:59:37.832271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-11-20 09:59:37.832304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-11-20 09:59:37.832489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-11-20 09:59:37.832521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-11-20 09:59:37.832707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-11-20 09:59:37.832739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-11-20 09:59:37.832855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-11-20 09:59:37.832888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 
00:27:14.834 [2024-11-20 09:59:37.833074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-11-20 09:59:37.833107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-11-20 09:59:37.833233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-11-20 09:59:37.833266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-11-20 09:59:37.833525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.834 [2024-11-20 09:59:37.833556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.834 qpair failed and we were unable to recover it. 00:27:14.834 [2024-11-20 09:59:37.833755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.833787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.833975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.834008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-11-20 09:59:37.834122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.834155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.834399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.834431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.834555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.834587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.834765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.834796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.835000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.835032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-11-20 09:59:37.835234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.835266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.835484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.835516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.835639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.835671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.835941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.835984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.836165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.836197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-11-20 09:59:37.836391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.836423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.836660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.836692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.836800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.836831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.836999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.837033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.837163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.837196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-11-20 09:59:37.837332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.837368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.837542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.837574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.837830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.837862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.838040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.838074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.838256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.838289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-11-20 09:59:37.838400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.838433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.838556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.838588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.838781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.838813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.839051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.839085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.839213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.839245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 
00:27:14.835 [2024-11-20 09:59:37.839485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.839517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.839634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.839666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.839856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.839888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.840016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.835 [2024-11-20 09:59:37.840050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.835 qpair failed and we were unable to recover it. 00:27:14.835 [2024-11-20 09:59:37.840284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.840316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-11-20 09:59:37.840516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.840548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.840681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.840713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.840899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.840932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.841153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.841185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.841315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.841347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-11-20 09:59:37.841518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.841550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.841734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.841768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.841935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.841978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.842159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.842191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.842310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.842341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-11-20 09:59:37.842561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.842595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.842895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.842926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.843127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.843160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.843398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.843429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.843601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.843632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-11-20 09:59:37.843826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.843857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.844051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.844085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.844269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.844301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.844513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.844546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.844680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.844713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-11-20 09:59:37.844893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.844925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.845121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.845154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.845272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.845304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.845435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.845468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.845650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.845682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-11-20 09:59:37.845800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.845838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.846026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.846059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.846232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.846263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.846456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.846488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.846687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.846718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-11-20 09:59:37.846835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.846868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.847126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.847158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.847341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.847373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.847544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.847576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 00:27:14.836 [2024-11-20 09:59:37.847676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.836 [2024-11-20 09:59:37.847709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.836 qpair failed and we were unable to recover it. 
00:27:14.836 [2024-11-20 09:59:37.847834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.836 [2024-11-20 09:59:37.847865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.836 qpair failed and we were unable to recover it.
[... the three lines above repeat with new timestamps for tqpair=0x7f7ba8000b90 through 09:59:37.860191 ...]
00:27:14.838 [2024-11-20 09:59:37.860431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.838 [2024-11-20 09:59:37.860504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.838 qpair failed and we were unable to recover it.
[... the three lines above repeat with new timestamps for tqpair=0x7f7b9c000b90 through 09:59:37.873245 ...]
00:27:14.840 [2024-11-20 09:59:37.873432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.873466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.873645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.873677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.873853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.873886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.874125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.874158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.874296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.874329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-11-20 09:59:37.874522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.874555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.874660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.874692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.874821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.874854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.875115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.875148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.875335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.875367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-11-20 09:59:37.875541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.875574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.875691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.875723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.875853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.875887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.876078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.876113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.876234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.876267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-11-20 09:59:37.876383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.876415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.876545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.876579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.876691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.876724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.876913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.876945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.877128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.877162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-11-20 09:59:37.877283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.877315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.877601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.877633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.877845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.877878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.878067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.878101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.878308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.878340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-11-20 09:59:37.878472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.878505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.878766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.878798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.878921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.878964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.879144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.879176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.879368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.879401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 
00:27:14.840 [2024-11-20 09:59:37.879578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.879611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.879850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.840 [2024-11-20 09:59:37.879882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.840 qpair failed and we were unable to recover it. 00:27:14.840 [2024-11-20 09:59:37.880013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.880048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.880162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.880195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.880317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.880350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-11-20 09:59:37.880542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.880579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.880708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.880740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.880916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.880958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.881129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.881161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.881302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.881335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-11-20 09:59:37.881440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.881474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.881608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.881640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.881809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.881841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.882009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.882043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.882162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.882194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-11-20 09:59:37.882317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.882349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.882586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.882619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.882723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.882755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.882924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.882962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.883163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.883196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-11-20 09:59:37.883458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.883490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.883615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.883646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.883769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.883802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.884037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.884072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.884255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.884288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-11-20 09:59:37.884458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.884491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.884694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.884726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.884977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.885010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.885136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.885168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.885295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.885328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-11-20 09:59:37.885519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.885551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.885787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.885820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.885940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.885984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.886189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.886222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.886331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.886364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.841 [2024-11-20 09:59:37.886471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.886502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.886695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.886728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.886846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.886879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.887118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.887153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 00:27:14.841 [2024-11-20 09:59:37.887337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.841 [2024-11-20 09:59:37.887370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.841 qpair failed and we were unable to recover it. 
00:27:14.842 [2024-11-20 09:59:37.887584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-11-20 09:59:37.887616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-11-20 09:59:37.887787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-11-20 09:59:37.887819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-11-20 09:59:37.887945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-11-20 09:59:37.887988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-11-20 09:59:37.888226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-11-20 09:59:37.888259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 00:27:14.842 [2024-11-20 09:59:37.888471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.842 [2024-11-20 09:59:37.888504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.842 qpair failed and we were unable to recover it. 
00:27:14.842 [2024-11-20 09:59:37.888689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.842 [2024-11-20 09:59:37.888728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.842 qpair failed and we were unable to recover it.
00:27:14.842 [... the same three-line record (connect() failed, errno = 111, i.e. ECONNREFUSED) repeats for tqpair=0x7f7b9c000b90 with advancing timestamps through 09:59:37.894 ...]
00:27:14.842 [2024-11-20 09:59:37.894685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.842 [2024-11-20 09:59:37.894758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.842 qpair failed and we were unable to recover it.
00:27:14.843 [... the same record repeats for tqpair=0x8dbba0 with advancing timestamps through 09:59:37.907 ...]
00:27:14.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3072143 Killed                  "${NVMF_APP[@]}" "$@"
00:27:14.844 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:14.844 [... connect() failed, errno = 111 records for tqpair=0x8dbba0 continue, interleaved with the script output above ...]
00:27:14.844 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:14.844 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:14.844 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:14.844 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.844 [... connect() failed, errno = 111 records for tqpair=0x8dbba0 continue, interleaved with the xtrace lines above ...]
00:27:14.844 [... connect() failed, errno = 111 records for tqpair=0x8dbba0 repeat with advancing timestamps through 09:59:37.912 ...]
00:27:14.845 [2024-11-20 09:59:37.912966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.912999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.913208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.913242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.913506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.913539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.913880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.913962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.914161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.914198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.914388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.914419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.914548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.914580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.914764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.914797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.915049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.915081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.915217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.915249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.915506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.915537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.915778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.915810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3072868
00:27:14.845 [2024-11-20 09:59:37.916049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.916087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.916259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.916294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3072868
00:27:14.845 [2024-11-20 09:59:37.916468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:14.845 [2024-11-20 09:59:37.916505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3072868 ']'
00:27:14.845 [2024-11-20 09:59:37.916745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.916780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.916982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.917018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.917148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.917182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:14.845 [2024-11-20 09:59:37.917335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.917370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.917560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.917597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:14.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:14.845 [2024-11-20 09:59:37.917836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.917872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:14.845 [2024-11-20 09:59:37.918063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.918099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 09:59:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:14.845 [2024-11-20 09:59:37.918240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.918276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.918456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.918492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.918632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.918665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.918894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.918928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.919249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.919282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.919400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.919434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.919554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.919587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.919714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.919748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.919926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.919968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.920117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.845 [2024-11-20 09:59:37.920154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.845 qpair failed and we were unable to recover it.
00:27:14.845 [2024-11-20 09:59:37.920288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.920321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.920462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.920495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.920768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.920802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.920940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.920987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.921230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.921265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.921459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.921493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.921734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.921773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.921968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.922003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.922189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.922224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.922452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.922523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.922649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.922685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.922971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.923007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.923126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.923160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.923356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.923390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.923593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.923626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.923825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.923860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.924103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.924137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.924273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.924305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.924486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.924519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.924642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.924676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.924871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.924906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.925174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.925210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.925403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.925437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.925681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.925713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.925929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.925973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.926090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.926124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.926317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.926350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.926489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.926522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.926716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.926751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.926877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.926911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.927142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.927178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.927358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.927391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.927604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.927639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.927893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.927982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.928208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.928247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.928495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.928528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.928715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.928748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.928968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.846 [2024-11-20 09:59:37.929004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.846 qpair failed and we were unable to recover it.
00:27:14.846 [2024-11-20 09:59:37.929262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.929296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.929435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.929467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.929670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.929702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.929841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.929874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.930113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.930147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.930388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.930420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.930621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.930654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.930791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.930824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.931014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.931056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.931322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.931356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.931474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.931507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.931618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.931650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.931870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.931902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.932110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.932145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.932335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.932368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.932481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.932513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.932819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.932851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.933112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.933146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.933345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.933381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.933586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.847 [2024-11-20 09:59:37.933618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.847 qpair failed and we were unable to recover it.
00:27:14.847 [2024-11-20 09:59:37.933854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.933886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.934029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.934063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.934265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.934298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.934488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.934522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.934719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.934753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 
00:27:14.847 [2024-11-20 09:59:37.934887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.934922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.935138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.935174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.935313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.935347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.935613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.935646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.935903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.935934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 
00:27:14.847 [2024-11-20 09:59:37.936197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.936240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.936429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.936462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.936598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.936633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.936835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.936867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.937079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.937114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 
00:27:14.847 [2024-11-20 09:59:37.937364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.937440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.937651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.937689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.847 qpair failed and we were unable to recover it. 00:27:14.847 [2024-11-20 09:59:37.937900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.847 [2024-11-20 09:59:37.937934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.938143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.938176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.938358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.938392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 
00:27:14.848 [2024-11-20 09:59:37.938594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.938626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.938746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.938779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.939070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.939108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.939298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.939330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.939462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.939495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 
00:27:14.848 [2024-11-20 09:59:37.939675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.939709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.939989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.940024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.940201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.940235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.940427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.940459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.940731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.940765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 
00:27:14.848 [2024-11-20 09:59:37.940886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.940917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.941177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.941210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.941400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.941434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.941560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.941593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.941809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.941843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 
00:27:14.848 [2024-11-20 09:59:37.942105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.942139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.942333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.942366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.942570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.942604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.942730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.942762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.942964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.942999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 
00:27:14.848 [2024-11-20 09:59:37.943300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.943334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.943519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.943553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.943748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.943787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.943904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.943938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.944070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.944102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 
00:27:14.848 [2024-11-20 09:59:37.944293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.944326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.944503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.944536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.944659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.944691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.944822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.944854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.945034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.945068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 
00:27:14.848 [2024-11-20 09:59:37.945319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.945352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.945535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.945569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.945838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.848 [2024-11-20 09:59:37.945870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.848 qpair failed and we were unable to recover it. 00:27:14.848 [2024-11-20 09:59:37.945996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.946029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.946229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.946265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 
00:27:14.849 [2024-11-20 09:59:37.946450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.946483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.946597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.946629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.946858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.946931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.947108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.947147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.947275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.947311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 
00:27:14.849 [2024-11-20 09:59:37.947496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.947530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.947650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.947684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.947867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.947900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.948124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.948159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.948409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.948443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 
00:27:14.849 [2024-11-20 09:59:37.948680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.948713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.948901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.948934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.949122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.949154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.949361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.949393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.949511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.949554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 
00:27:14.849 [2024-11-20 09:59:37.949687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.949720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.949907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.949940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.950131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.950164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.950297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.950330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.950508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.950541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 
00:27:14.849 [2024-11-20 09:59:37.950770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.950803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.950923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.950985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.951112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.951147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.951329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.951363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.951543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.951585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 
00:27:14.849 [2024-11-20 09:59:37.951761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.951795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.952046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.952082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.952262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.952296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.952497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.952530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 00:27:14.849 [2024-11-20 09:59:37.952691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.849 [2024-11-20 09:59:37.952723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.849 qpair failed and we were unable to recover it. 
00:27:14.849 [2024-11-20 09:59:37.952847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.849 [2024-11-20 09:59:37.952880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.849 qpair failed and we were unable to recover it.
[the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet repeats, differing only in timestamps, from 09:59:37.953069 through 09:59:37.967029]
00:27:14.851 [2024-11-20 09:59:37.967163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.851 [2024-11-20 09:59:37.967197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.851 qpair failed and we were unable to recover it.
00:27:14.851 [2024-11-20 09:59:37.967305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.851 [2024-11-20 09:59:37.967348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.851 qpair failed and we were unable to recover it.
00:27:14.851 [2024-11-20 09:59:37.967520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.851 [2024-11-20 09:59:37.967526] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization...
00:27:14.851 [2024-11-20 09:59:37.967553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.851 [2024-11-20 09:59:37.967579] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:14.851 qpair failed and we were unable to recover it.
00:27:14.852 [2024-11-20 09:59:37.967627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e9af0 (9): Bad file descriptor
00:27:14.852 [2024-11-20 09:59:37.967903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.852 [2024-11-20 09:59:37.968016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.852 qpair failed and we were unable to recover it.
00:27:14.852 [2024-11-20 09:59:37.968244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.852 [2024-11-20 09:59:37.968280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.852 qpair failed and we were unable to recover it.
[the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet repeats, differing only in timestamps, from 09:59:37.968470 through 09:59:37.975429]
00:27:14.853 [2024-11-20 09:59:37.975554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.975586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.975757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.975791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.975991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.976026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.976297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.976330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.976526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.976560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 
00:27:14.853 [2024-11-20 09:59:37.976760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.976792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.976977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.977011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.977218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.977253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.977430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.977461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.977578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.977611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 
00:27:14.853 [2024-11-20 09:59:37.977726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.977759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.977871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.977905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.978043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.978076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.978199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.978231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.978337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.978391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 
00:27:14.853 [2024-11-20 09:59:37.978575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.978608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.978785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.978824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.978932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.978980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.979218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.979252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.979436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.979471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 
00:27:14.853 [2024-11-20 09:59:37.979588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.979620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.979797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.979829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.979935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.979983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.980179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.980211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.980333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.980366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 
00:27:14.853 [2024-11-20 09:59:37.980611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.980646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.980769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.980801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.980908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.980942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.981073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.981105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 00:27:14.853 [2024-11-20 09:59:37.981219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.853 [2024-11-20 09:59:37.981254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.853 qpair failed and we were unable to recover it. 
00:27:14.853 [2024-11-20 09:59:37.981466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.981501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.981762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.981795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.981972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.982007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.982178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.982211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.982313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.982346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 
00:27:14.854 [2024-11-20 09:59:37.982524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.982557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.982832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.982865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.983103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.983137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.983312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.983345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.983522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.983555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 
00:27:14.854 [2024-11-20 09:59:37.983689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.983722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.983829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.983863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.983974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.984007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.984180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.984213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.984394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.984427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 
00:27:14.854 [2024-11-20 09:59:37.984608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.984641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.984830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.984863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.985122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.985158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.985276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.985308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.985414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.985446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 
00:27:14.854 [2024-11-20 09:59:37.985567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.985602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.985727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.985758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.985999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.986035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.986159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.986191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.986464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.986497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 
00:27:14.854 [2024-11-20 09:59:37.986600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.986632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.986754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.986786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.986906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.986944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.987081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.987113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.987296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.987330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 
00:27:14.854 [2024-11-20 09:59:37.987571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.987605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.987715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.987750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.987871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.987903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.988014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.988045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.854 [2024-11-20 09:59:37.988313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.988347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 
00:27:14.854 [2024-11-20 09:59:37.988474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.854 [2024-11-20 09:59:37.988505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.854 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.988753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.988786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.988990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.989025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.989221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.989254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.989457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.989492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 
00:27:14.855 [2024-11-20 09:59:37.989614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.989647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.989760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.989793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.989971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.990005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.990136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.990169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.990310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.990344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 
00:27:14.855 [2024-11-20 09:59:37.990583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.990618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.990874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.990909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.991047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.991081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.991265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.991298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.991481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.991512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 
00:27:14.855 [2024-11-20 09:59:37.991647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.991679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.991849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.991881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.991994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.992028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.992136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.992168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.992339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.992378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 
00:27:14.855 [2024-11-20 09:59:37.992620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.992654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.992777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.992809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.992933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.992980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.993086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.993119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.993292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.993325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 
00:27:14.855 [2024-11-20 09:59:37.993504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.993536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.993733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.993767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.993962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.993997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.994207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.994241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.994417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.994450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 
00:27:14.855 [2024-11-20 09:59:37.994627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.994660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.994831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.994863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.995049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.995084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.995273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.995307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.995419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.995451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 
00:27:14.855 [2024-11-20 09:59:37.995717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.995750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.995922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.995964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.996084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.855 [2024-11-20 09:59:37.996115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.855 qpair failed and we were unable to recover it. 00:27:14.855 [2024-11-20 09:59:37.996294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.996329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.996466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.996504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 
00:27:14.856 [2024-11-20 09:59:37.996673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.996705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.996824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.996856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.996966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.997001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.997099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.997133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.997243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.997276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 
00:27:14.856 [2024-11-20 09:59:37.997536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.997570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.997752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.997786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.997975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.998010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.998212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.998251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.998377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.998410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 
00:27:14.856 [2024-11-20 09:59:37.998523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.998557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.998678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.998711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.998963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.998997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.999204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.999237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.999421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.999452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 
00:27:14.856 [2024-11-20 09:59:37.999567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.999600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.999717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:37.999748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:37.999979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.000012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.000116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.000148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.000268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.000299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 
00:27:14.856 [2024-11-20 09:59:38.000496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.000532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.000645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.000677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.000857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.000891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.001151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.001185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.001298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.001331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 
00:27:14.856 [2024-11-20 09:59:38.001506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.001538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.001780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.001815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.001937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.001983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.002192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.002225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.002339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.002371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 
00:27:14.856 [2024-11-20 09:59:38.002484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.002517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.002757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.002791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.856 [2024-11-20 09:59:38.002913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.856 [2024-11-20 09:59:38.002946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.856 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.003100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.003134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.003267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.003299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 
00:27:14.857 [2024-11-20 09:59:38.003470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.003503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.003634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.003667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.003786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.003817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.003936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.003980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.004170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.004203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 
00:27:14.857 [2024-11-20 09:59:38.004318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.004351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.004458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.004491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.004673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.004707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.004871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.004903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.005107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.005139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 
00:27:14.857 [2024-11-20 09:59:38.005323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.005355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.005527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.005562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.005738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.005777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.006015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.006051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.006231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.006263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 
00:27:14.857 [2024-11-20 09:59:38.006454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.006487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.006614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.006646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.006765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.006798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.006928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.006969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.007162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.007194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 
00:27:14.857 [2024-11-20 09:59:38.007314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.007348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.007579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.007612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.007851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.007883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.008009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.008043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.008173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.008204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 
00:27:14.857 [2024-11-20 09:59:38.008304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.008337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.008592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.008666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.008875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.008913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.009105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.009179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 00:27:14.857 [2024-11-20 09:59:38.009329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.857 [2024-11-20 09:59:38.009366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.857 qpair failed and we were unable to recover it. 
00:27:14.857 [2024-11-20 09:59:38.009502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.857 [2024-11-20 09:59:38.009536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.857 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats continuously: for tqpair=0x7f7ba8000b90 from 09:59:38.009667 through 09:59:38.026081, then for tqpair=0x7f7ba0000b90 from 09:59:38.026309 through 09:59:38.037660, all against addr=10.0.0.2, port=4420 ...]
00:27:14.861 [2024-11-20 09:59:38.037850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.037883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.038020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.038076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.038191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.038223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.038327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.038359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.038602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.038635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 
00:27:14.861 [2024-11-20 09:59:38.038825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.038858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.038975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.039008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.039127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.039161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.039277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.039309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.039415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.039446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 
00:27:14.861 [2024-11-20 09:59:38.039635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.039667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.039792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.039824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.039959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.039993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.040100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.040133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.040341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.040381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 
00:27:14.861 [2024-11-20 09:59:38.040502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.040534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.040645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.040678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.040787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.040816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.040912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.040939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.041117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.041145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 
00:27:14.861 [2024-11-20 09:59:38.041243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.041271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.041449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.041476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.041640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.041668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.041775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.041803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.041923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.041960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 
00:27:14.861 [2024-11-20 09:59:38.042075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.042104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.042206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.042234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.042335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.042363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.042543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.042572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.042687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.042714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 
00:27:14.861 [2024-11-20 09:59:38.042829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.042857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.042987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.043018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.861 qpair failed and we were unable to recover it. 00:27:14.861 [2024-11-20 09:59:38.043172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.861 [2024-11-20 09:59:38.043201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.043361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.043388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.043488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.043516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 
00:27:14.862 [2024-11-20 09:59:38.043627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.043655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.043815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.043843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.044015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.044044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.044214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.044242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.044412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.044441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 
00:27:14.862 [2024-11-20 09:59:38.044608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.044636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.044795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.044867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.045099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.045138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.045270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.045305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.045434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.045467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 
00:27:14.862 [2024-11-20 09:59:38.045655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.045689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.045801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.045834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.046082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.046114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.046231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.046261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.046450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.046478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 
00:27:14.862 [2024-11-20 09:59:38.046571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.046599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.046772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.046800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.047005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.047036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.047143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.047172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.047298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.047326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 
00:27:14.862 [2024-11-20 09:59:38.047449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.047478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.047660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.047688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.048846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.048891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.049111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.049142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.049333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.049361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 
00:27:14.862 [2024-11-20 09:59:38.049592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.049638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.049879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.049912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.050030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.050065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.050196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.050230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.050420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.050452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 
00:27:14.862 [2024-11-20 09:59:38.050570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.050603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.050774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.050808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.050918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.050977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.051110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.051143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 00:27:14.862 [2024-11-20 09:59:38.051311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.862 [2024-11-20 09:59:38.051344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.862 qpair failed and we were unable to recover it. 
00:27:14.862 [2024-11-20 09:59:38.051484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.863 [2024-11-20 09:59:38.051516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.863 qpair failed and we were unable to recover it. 00:27:14.863 [2024-11-20 09:59:38.051657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.863 [2024-11-20 09:59:38.051690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.863 qpair failed and we were unable to recover it. 00:27:14.863 [2024-11-20 09:59:38.051827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.863 [2024-11-20 09:59:38.051860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.863 qpair failed and we were unable to recover it. 00:27:14.863 [2024-11-20 09:59:38.051993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.863 [2024-11-20 09:59:38.052028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.863 qpair failed and we were unable to recover it. 00:27:14.863 [2024-11-20 09:59:38.052221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.863 [2024-11-20 09:59:38.052254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.863 qpair failed and we were unable to recover it. 
00:27:14.863 [2024-11-20 09:59:38.052502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.052536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.052721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.052754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.052930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.052972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.053094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.053129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.053252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.053285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.053398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.053430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.053539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.053579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.053715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.053747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.053873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.053907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.054100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.054134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.054230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:14.863 [2024-11-20 09:59:38.054251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.054282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.054392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.054422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.054537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.054570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.054742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.054775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.054900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.054932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.055129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.055163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.055406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.055439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.055625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.055658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.055784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.055824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.055990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.056031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.056154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.056186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.056300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.056332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.056458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.056493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.056597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.056631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.056744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.056777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.056918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.056958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.057072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.863 [2024-11-20 09:59:38.057104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.863 qpair failed and we were unable to recover it.
00:27:14.863 [2024-11-20 09:59:38.057277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.057310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.057493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.057527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.057653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.057685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.057930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.057971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.058150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.058185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.058305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.058337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.058526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.058560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.058764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.058805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.059012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.059045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.059168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.059202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.059418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.059451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.059587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.059621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.059805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.059838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.060016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.060051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.060245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.060278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.060455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.060488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.060612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.060648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.060848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.060881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.061010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.061045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.061174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.061216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.061400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.061434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.061565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.061599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.061763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.061797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.061933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.061978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.062281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.062316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.062441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.062474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.062674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.062709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.062891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.062924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.063047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.063081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.063191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.063224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.063420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.063455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.063633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.063667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.063846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.063887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.064026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.064062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.064249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.064284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.064397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.064428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.064555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.864 [2024-11-20 09:59:38.064589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.864 qpair failed and we were unable to recover it.
00:27:14.864 [2024-11-20 09:59:38.064767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.064802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.064923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.064967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.065164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.065200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.065327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.065362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.065537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.065573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.065687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.065726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.065911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.065944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.066076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.066109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.066223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.066256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.066384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.066418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.066553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.066586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.066704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.066737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.066840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.066873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.066997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.067031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.067209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.067242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.067441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.067474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.067651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.067684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.069084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.069141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.069387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.069422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.069695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.069728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.069848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.069879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.070010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.070044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.070204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.070258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.070439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.070474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.070666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.070700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.070826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.070858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.070995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.071031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.071213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.071247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.071416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.071449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.071634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.071666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.071849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.071882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.072069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.072105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.072278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.072311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.072483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.072523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.072645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.072684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.072868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.072901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.073114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.073148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.073277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.865 [2024-11-20 09:59:38.073310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.865 qpair failed and we were unable to recover it.
00:27:14.865 [2024-11-20 09:59:38.073457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.073490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.073612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.073644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.073816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.073848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.074031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.074066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.074247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.074281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.074382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.074415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.074593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.074667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.074798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.074837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.074986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.075023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.075233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.075268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.075446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.075481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.075598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.075640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.075925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.866 [2024-11-20 09:59:38.075971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.866 qpair failed and we were unable to recover it.
00:27:14.866 [2024-11-20 09:59:38.076188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.076222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.076340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.076373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.076505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.076538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.076666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.076700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.076883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.076917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 
00:27:14.866 [2024-11-20 09:59:38.077062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.077100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.077296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.077329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.077576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.077610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.077726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.077761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.077879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.077912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 
00:27:14.866 [2024-11-20 09:59:38.078044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.078079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.078320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.078353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.078486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.078519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.079892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.079959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.080127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.080162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 
00:27:14.866 [2024-11-20 09:59:38.080341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.080374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.080552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.080586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.080694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.080726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.080852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.080886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.081043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.081077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 
00:27:14.866 [2024-11-20 09:59:38.081214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.081246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.081365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.081397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.081538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.081570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.081690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.081723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 00:27:14.866 [2024-11-20 09:59:38.081844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.866 [2024-11-20 09:59:38.081878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.866 qpair failed and we were unable to recover it. 
00:27:14.866 [2024-11-20 09:59:38.082005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.082045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.082271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.082306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.082441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.082473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.083785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.083838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.084112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.084150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 
00:27:14.867 [2024-11-20 09:59:38.084332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.084366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.084473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.084507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.084749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.084783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.084967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.085002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.085112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.085144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 
00:27:14.867 [2024-11-20 09:59:38.085341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.085374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.085492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.085524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.085693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.085725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.085912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.085944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.086096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.086130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 
00:27:14.867 [2024-11-20 09:59:38.086256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.086287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.086405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.086437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.086609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.086641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.086882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.086915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.087060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.087098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 
00:27:14.867 [2024-11-20 09:59:38.087274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.087308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.087484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.087518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.087626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.087660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.087839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.087873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.087995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.088030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 
00:27:14.867 [2024-11-20 09:59:38.088137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.088170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.088303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.088337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.088536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.088578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.088687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.088720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.088911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.088945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 
00:27:14.867 [2024-11-20 09:59:38.089150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.089184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.089367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.089399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.089520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.867 [2024-11-20 09:59:38.089553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.867 qpair failed and we were unable to recover it. 00:27:14.867 [2024-11-20 09:59:38.089806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.089839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.089971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.090005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 
00:27:14.868 [2024-11-20 09:59:38.090186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.090220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.090408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.090441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.090570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.090604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.090718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.090752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.090873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.090906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 
00:27:14.868 [2024-11-20 09:59:38.091083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.091116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.091233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.091267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.091394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.091427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.091599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.091634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.091818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.091851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 
00:27:14.868 [2024-11-20 09:59:38.091980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.092015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.092198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.092231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.092343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.092375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.092548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.092582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.092796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.092830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 
00:27:14.868 [2024-11-20 09:59:38.092957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.092992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.093108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.093141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.094601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.094656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.094870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.094905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.095106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.095141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 
00:27:14.868 [2024-11-20 09:59:38.095264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.095296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.095533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.095568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.095751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.095784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.095913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.095962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.096170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.868 [2024-11-20 09:59:38.096198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.868 [2024-11-20 09:59:38.096206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.868 [2024-11-20 09:59:38.096215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:14.868 [2024-11-20 09:59:38.096222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:14.868 [2024-11-20 09:59:38.096229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.868 [2024-11-20 09:59:38.096261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.868 qpair failed and we were unable to recover it.
00:27:14.868 [2024-11-20 09:59:38.096433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.868 [2024-11-20 09:59:38.096465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.868 qpair failed and we were unable to recover it.
00:27:14.868 [2024-11-20 09:59:38.096651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.868 [2024-11-20 09:59:38.096682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.868 qpair failed and we were unable to recover it.
00:27:14.868 [2024-11-20 09:59:38.096857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.868 [2024-11-20 09:59:38.096891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.868 qpair failed and we were unable to recover it.
00:27:14.868 [2024-11-20 09:59:38.097037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.868 [2024-11-20 09:59:38.097072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.868 qpair failed and we were unable to recover it.
00:27:14.868 [2024-11-20 09:59:38.097278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.097311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.097481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.097521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.097654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.097688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 00:27:14.868 [2024-11-20 09:59:38.097895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.868 [2024-11-20 09:59:38.097929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.868 qpair failed and we were unable to recover it. 
00:27:14.868 [2024-11-20 09:59:38.097895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:14.868 [2024-11-20 09:59:38.098003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:14.868 [2024-11-20 09:59:38.098109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:14.868 [2024-11-20 09:59:38.098178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.869 [2024-11-20 09:59:38.098212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.869 [2024-11-20 09:59:38.098110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:14.869 qpair failed and we were unable to recover it.
00:27:14.869 [2024-11-20 09:59:38.098317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.869 [2024-11-20 09:59:38.098347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.869 qpair failed and we were unable to recover it.
00:27:14.869 [2024-11-20 09:59:38.098518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.869 [2024-11-20 09:59:38.098549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.869 qpair failed and we were unable to recover it.
00:27:14.869 [2024-11-20 09:59:38.098682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.869 [2024-11-20 09:59:38.098714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:14.869 qpair failed and we were unable to recover it.
00:27:14.869 [2024-11-20 09:59:38.098930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.098973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.099089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.099123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.099253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.099286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.099471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.099506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.099633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.099666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 
00:27:14.869 [2024-11-20 09:59:38.099837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.099876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.100085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.100121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.100249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.100283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.100391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.100425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.100545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.100579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 
00:27:14.869 [2024-11-20 09:59:38.100764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.100798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.100924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.100968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.101152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.101185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.101319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.101352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.101480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.101512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 
00:27:14.869 [2024-11-20 09:59:38.101623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.101657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.101867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.101900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.102031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.102066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.102355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.102388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.102595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.102629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 
00:27:14.869 [2024-11-20 09:59:38.102808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.102841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.102966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.103000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.103179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.103212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.103345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.103380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.103504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.103536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 
00:27:14.869 [2024-11-20 09:59:38.103651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.103684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.103869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.103902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.104056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.104091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.104204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.104238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.104418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.104451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 
00:27:14.869 [2024-11-20 09:59:38.104632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.104665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.104772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.104806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.104991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.869 [2024-11-20 09:59:38.105069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:14.869 qpair failed and we were unable to recover it. 00:27:14.869 [2024-11-20 09:59:38.105366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.105414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.105652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.105688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 
00:27:14.870 [2024-11-20 09:59:38.105879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.105913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.106099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.106135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.106256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.106289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.106421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.106454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.106590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.106624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 
00:27:14.870 [2024-11-20 09:59:38.106795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.106828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.107005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.107041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.107222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.107255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.107381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.107415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.107533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.107565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 
00:27:14.870 [2024-11-20 09:59:38.107688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.107721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.107899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.107932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.108058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.108092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.108272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.108306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.108420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.108453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 
00:27:14.870 [2024-11-20 09:59:38.108639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.108671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.108856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.108889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.109080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.109115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.109225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.109258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.109368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.109402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 
00:27:14.870 [2024-11-20 09:59:38.109585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.109619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.109732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.109764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.110007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.110041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.110176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.110209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.110337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.110372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 
00:27:14.870 [2024-11-20 09:59:38.110541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.110575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.110766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.110799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.110977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.111012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.111125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.111157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.111290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.111323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 
00:27:14.870 [2024-11-20 09:59:38.111500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.111532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.111728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.111762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.111872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.111905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.112076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.112111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.112385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.112418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 
00:27:14.870 [2024-11-20 09:59:38.112705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.112737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.112926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.112966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.113157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.870 [2024-11-20 09:59:38.113196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.870 qpair failed and we were unable to recover it. 00:27:14.870 [2024-11-20 09:59:38.113335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.871 [2024-11-20 09:59:38.113369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.871 qpair failed and we were unable to recover it. 00:27:14.871 [2024-11-20 09:59:38.113634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.871 [2024-11-20 09:59:38.113668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:14.871 qpair failed and we were unable to recover it. 
00:27:14.871 [2024-11-20 09:59:38.119433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.871 [2024-11-20 09:59:38.119467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.871 qpair failed and we were unable to recover it.
00:27:14.871 [2024-11-20 09:59:38.119665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.871 [2024-11-20 09:59:38.119699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.871 qpair failed and we were unable to recover it.
00:27:14.871 [2024-11-20 09:59:38.119882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.871 [2024-11-20 09:59:38.119916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:14.871 qpair failed and we were unable to recover it.
00:27:14.871 [2024-11-20 09:59:38.120138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.871 [2024-11-20 09:59:38.120191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.871 qpair failed and we were unable to recover it.
00:27:14.871 [2024-11-20 09:59:38.120412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:14.871 [2024-11-20 09:59:38.120448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:14.871 qpair failed and we were unable to recover it.
00:27:15.144 [2024-11-20 09:59:38.136628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.136665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-11-20 09:59:38.136858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.136891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-11-20 09:59:38.137100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.137133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-11-20 09:59:38.137377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.137410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-11-20 09:59:38.137719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.137752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 
00:27:15.144 [2024-11-20 09:59:38.137871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.137904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-11-20 09:59:38.138175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.138209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-11-20 09:59:38.138402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.138436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-11-20 09:59:38.138660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.138694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-11-20 09:59:38.138810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.138845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 
00:27:15.144 [2024-11-20 09:59:38.138987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.139021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.144 [2024-11-20 09:59:38.139277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.144 [2024-11-20 09:59:38.139312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.144 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.139420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.139454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.139690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.139725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.139877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.139934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-11-20 09:59:38.140224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.140258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.140444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.140478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.140736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.140770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.141039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.141075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.141348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.141382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-11-20 09:59:38.141576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.141610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.141797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.141831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.142026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.142061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.142272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.142306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.142504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.142536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-11-20 09:59:38.142715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.142751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.142868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.142901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.143107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.143151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.143266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.143300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.143435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.143470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-11-20 09:59:38.143599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.143633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.143856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.143890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.144170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.144205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.144468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.144503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.144622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.144656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-11-20 09:59:38.144773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.144807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.144919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.144964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.145206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.145241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.145369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.145404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.145593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.145629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-11-20 09:59:38.145824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.145859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.146051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.146089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.146222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.146258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.146542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.146577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.146801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.146835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 
00:27:15.145 [2024-11-20 09:59:38.147053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.147091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.147288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.147327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.147476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.147512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.147760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.145 [2024-11-20 09:59:38.147797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.145 qpair failed and we were unable to recover it. 00:27:15.145 [2024-11-20 09:59:38.148074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.148109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.146 [2024-11-20 09:59:38.148315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.148352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.148526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.148559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.148800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.148834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.149087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.149123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.149267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.149332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.146 [2024-11-20 09:59:38.149501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.149550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.149707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.149741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.149875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.149909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.150092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.150126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.150361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.150394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.146 [2024-11-20 09:59:38.150649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.150682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.150922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.150979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.151109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.151141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.151270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.151304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.151505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.151538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.146 [2024-11-20 09:59:38.151767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.151799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.151990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.152025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.152224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.152266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.152447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.152481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.152652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.152685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.146 [2024-11-20 09:59:38.152946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.152987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.153165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.153198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.153395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.153429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.153569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.153601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.153783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.153815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.146 [2024-11-20 09:59:38.154028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.154064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.154247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.154280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.154414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.154448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.154580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.154613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.154833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.154866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.146 [2024-11-20 09:59:38.155001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.155035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.155165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.155198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.155317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.155349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.155480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.155512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 00:27:15.146 [2024-11-20 09:59:38.155693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.146 [2024-11-20 09:59:38.155726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.146 qpair failed and we were unable to recover it. 
00:27:15.147 [2024-11-20 09:59:38.157897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.147 [2024-11-20 09:59:38.157943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.147 qpair failed and we were unable to recover it.
00:27:15.148 [2024-11-20 09:59:38.167257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.148 [2024-11-20 09:59:38.167305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420
00:27:15.148 qpair failed and we were unable to recover it.
00:27:15.150 [2024-11-20 09:59:38.181135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.181168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.181407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.181439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.181607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.181640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.181820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.181852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.182111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.182145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-11-20 09:59:38.182327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.182358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.182495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.182528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.182753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.182800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.183027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.183065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.183264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.183297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-11-20 09:59:38.183585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.183618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.183750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.183784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.183999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.184034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.184175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.184209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.184470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.184503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-11-20 09:59:38.184694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.184727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.184932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.184975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.185215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.185249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.185435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.185468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.185688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.185721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-11-20 09:59:38.185907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.185939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.186101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.186135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.186318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.186350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.186492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.186525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.186702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.186735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-11-20 09:59:38.186871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.186904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.187115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.187149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.187312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.187346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.187570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.187603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.187786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.187819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.150 [2024-11-20 09:59:38.187991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.188026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.188274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.188307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.188563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.188597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.188771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.188804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 00:27:15.150 [2024-11-20 09:59:38.188933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.150 [2024-11-20 09:59:38.188977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.150 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-11-20 09:59:38.189115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.189149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.189276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.189309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.189434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.189467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.189655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.189688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.189872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.189905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-11-20 09:59:38.190106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.190141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.190402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.190435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.190570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.190603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.190710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.190744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.191018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.191052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-11-20 09:59:38.191189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.191221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.191474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.191506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.191646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.191684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.191957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.191992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.192175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.192206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-11-20 09:59:38.192396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.192430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.192626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.192659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.192842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.192874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.193140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.193175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.193306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.193338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-11-20 09:59:38.193466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.193498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.193609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.193642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.193815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.193847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.194059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.194094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.194296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.194329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-11-20 09:59:38.194520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.194553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.194816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.194850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.194973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.195007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.195192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.195225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.195362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.195395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-11-20 09:59:38.195577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.195609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.195828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.195861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.196055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.196089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.196232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.196264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.196522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.196555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 
00:27:15.151 [2024-11-20 09:59:38.196744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.196776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.151 [2024-11-20 09:59:38.196988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.151 [2024-11-20 09:59:38.197022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.151 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.197213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.197247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.197459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.197491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.197764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.197797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 
00:27:15.152 [2024-11-20 09:59:38.198005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.198039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.198304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.198336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.198512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.198544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.198686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.198718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.198833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.198866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 
00:27:15.152 [2024-11-20 09:59:38.199003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.199037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.199136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.199167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.199375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.199409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.199542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.199575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 00:27:15.152 [2024-11-20 09:59:38.199710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.152 [2024-11-20 09:59:38.199742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.152 qpair failed and we were unable to recover it. 
00:27:15.152 [2024-11-20 09:59:38.199929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.199970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.200142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.200175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.200306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.200351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.200475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.200508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.200694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.200726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.200917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.200961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.201090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.201122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.201232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.201266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.201444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.201477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.201588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.201620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.201804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.201838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.202094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.202127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.202232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.202265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.202386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.202417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.202588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.202622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.202727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.202760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:15.152 [2024-11-20 09:59:38.202902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.202936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.203058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.203091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.203204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:15.152 [2024-11-20 09:59:38.203236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.203349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.203382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.203551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:15.152 [2024-11-20 09:59:38.203584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 [2024-11-20 09:59:38.203710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.203742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.152 qpair failed and we were unable to recover it.
00:27:15.152 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:15.152 [2024-11-20 09:59:38.203862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.152 [2024-11-20 09:59:38.203896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.204023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.204057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:15.153 [2024-11-20 09:59:38.204247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.204280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.204409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.204441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.204612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.204646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.204781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.204815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.204934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.204978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.205098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.205130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.205312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.205346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.205468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.205500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.205755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.205786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.205993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.206029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.206221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.206255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.206433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.206467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.206664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.206697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.206959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.206995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.207176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.207209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.207407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.207440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.207690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.207740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.207943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.207993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.208185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.208219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.208398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.208432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.208625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.208659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.208833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.208866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.209000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.209034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.209229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.209263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.209401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.209434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.209549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.209583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.209732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.209765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.209886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.209918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.210142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.210179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.210293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.210332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.210512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.153 [2024-11-20 09:59:38.210545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.153 qpair failed and we were unable to recover it.
00:27:15.153 [2024-11-20 09:59:38.210689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.210722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.210897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.210931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.211087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.211120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.211298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.211331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.211506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.211537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.211678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.211711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.211963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.211997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.212188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.212221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.212366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.212398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.212504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.212536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.212653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.212685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.212807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.212841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.213023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.213058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.213175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.213208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.213325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.213358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.213529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.213562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.213746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.213780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.213887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.213919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.214048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.214082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.214212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.214244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.214421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.214456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.214580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.214612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.214733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.214766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.214963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.214997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.215128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.215161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.215279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.215319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.215437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.215470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.215650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.215682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.215809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.215843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.215962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.215997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.216199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.216233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.216362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.216396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.216599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.216632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.216747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.216781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.216970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.217006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.217122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.217155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.217275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.217308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.217432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.154 [2024-11-20 09:59:38.217464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.154 qpair failed and we were unable to recover it.
00:27:15.154 [2024-11-20 09:59:38.217584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.217618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.217751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.217785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.217903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.217936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.218077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.218111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.218219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.218252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 
00:27:15.155 [2024-11-20 09:59:38.218367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.218399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.218650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.218683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.218856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.218888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.219079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.219113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.219250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.219283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 
00:27:15.155 [2024-11-20 09:59:38.219402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.219435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.219533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.219566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.219680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.219712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.219883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.219916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.220050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.220083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 
00:27:15.155 [2024-11-20 09:59:38.220256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.220290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.220411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.220444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.220652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.220685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.220883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.220915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.221135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.221173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 
00:27:15.155 [2024-11-20 09:59:38.221302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.221336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.221460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.221493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.221616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.221649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.221825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.221859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.222098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.222134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 
00:27:15.155 [2024-11-20 09:59:38.222272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.222305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.222494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.222528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.222721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.222761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.222888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.222921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.223058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.223092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 
00:27:15.155 [2024-11-20 09:59:38.223261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.223295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.223498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.223530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.223730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.223764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.223964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.223998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.224123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.224156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 
00:27:15.155 [2024-11-20 09:59:38.224296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.224330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.224636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.224669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.224905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.155 [2024-11-20 09:59:38.224938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.155 qpair failed and we were unable to recover it. 00:27:15.155 [2024-11-20 09:59:38.225136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.225170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.225285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.225318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 09:59:38.225525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.225559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.225783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.225818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.226015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.226050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.226169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.226202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.226342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.226376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 09:59:38.226512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.226546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.226809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.226841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.227017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.227051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.227197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.227228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.227348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.227381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 09:59:38.227632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.227666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.227842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.227874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.228156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.228191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.228330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.228364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.228591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.228626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 09:59:38.228886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.228919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.229059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.229092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.229229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.229262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.229387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.229419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.229601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.229633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 09:59:38.229890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.229923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.230079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.230112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.230239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.230273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.230457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.230490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.230672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.230704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 09:59:38.230892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.230924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.231121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.231155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.231290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.231329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.231466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.231499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 00:27:15.156 [2024-11-20 09:59:38.231693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.156 [2024-11-20 09:59:38.231726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.156 qpair failed and we were unable to recover it. 
00:27:15.156 [2024-11-20 09:59:38.232021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.232056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.232234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.232268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.232397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.232431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.232578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.232612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.232897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.232931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 09:59:38.233108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.233141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.233285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.233319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.233433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.233466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.233581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.233615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.233808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.233843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 09:59:38.234084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.234119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.234251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.234283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.234417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.234450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.234654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.234688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 00:27:15.157 [2024-11-20 09:59:38.234874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.157 [2024-11-20 09:59:38.234907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.157 qpair failed and we were unable to recover it. 
00:27:15.157 [2024-11-20 09:59:38.235075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.157 [2024-11-20 09:59:38.235109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420
00:27:15.157 qpair failed and we were unable to recover it.
00:27:15.157 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f7ba0000b90 (addr=10.0.0.2, port=4420) repeats continuously from 09:59:38.235 through 09:59:38.260, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:15.157 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:15.157 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:15.157 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.158 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:15.160 [2024-11-20 09:59:38.260988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.160 [2024-11-20 09:59:38.261023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.160 qpair failed and we were unable to recover it. 00:27:15.160 [2024-11-20 09:59:38.261160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.160 [2024-11-20 09:59:38.261194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.160 qpair failed and we were unable to recover it. 00:27:15.160 [2024-11-20 09:59:38.261434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.160 [2024-11-20 09:59:38.261467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.160 qpair failed and we were unable to recover it. 00:27:15.160 [2024-11-20 09:59:38.261662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.160 [2024-11-20 09:59:38.261694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.160 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.261877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.261909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 
00:27:15.161 [2024-11-20 09:59:38.262156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.262190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.262330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.262363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.262537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.262571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.262764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.262797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.263015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.263050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 
00:27:15.161 [2024-11-20 09:59:38.263241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.263272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.263510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.263543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.263724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.263756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.263961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.263996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.264189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.264223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 
00:27:15.161 [2024-11-20 09:59:38.264368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.264401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.264722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.264755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.265014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.265049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.265173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.265206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.265326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.265359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 
00:27:15.161 [2024-11-20 09:59:38.265548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.265581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.265759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.265792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.266116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.266151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.266266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.266300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.266484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.266518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 
00:27:15.161 [2024-11-20 09:59:38.266676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.266710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.266944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.266994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.267179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.267212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.267354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.267387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.267585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.267618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 
00:27:15.161 [2024-11-20 09:59:38.267869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.267904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.268085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.268120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.268260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.268292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.268529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.268562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.268762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.268796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 
00:27:15.161 [2024-11-20 09:59:38.268956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.268991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.269133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.269167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.269349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.269383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.269569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.269604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.269875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.269910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 
00:27:15.161 [2024-11-20 09:59:38.270081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.270126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.270316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.270348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.270481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.270515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.161 [2024-11-20 09:59:38.270762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.161 [2024-11-20 09:59:38.270797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.161 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.270988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.271025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 
00:27:15.162 [2024-11-20 09:59:38.271155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.271188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.271331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.271364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.271602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.271639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.271835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.271869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.272008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.272043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 
00:27:15.162 [2024-11-20 09:59:38.272281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.272316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.272493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.272528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.272767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.272802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.273091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.273146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.273332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.273366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 
00:27:15.162 [2024-11-20 09:59:38.273582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.273616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.273855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.273889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.274161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.274196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.274330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.274362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.274480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.274511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 
00:27:15.162 [2024-11-20 09:59:38.274714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.274746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.274938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.274983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.275113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.275147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.275337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.275371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.275513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.275546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 
00:27:15.162 [2024-11-20 09:59:38.275661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.275695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.275877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.275911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbba0 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.276130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.276170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.276413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.276446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 Malloc0 00:27:15.162 [2024-11-20 09:59:38.276574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.276607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 
00:27:15.162 [2024-11-20 09:59:38.276869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.276902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.277103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.277139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.162 [2024-11-20 09:59:38.277349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.277384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.277558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.277591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 
00:27:15.162 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:15.162 [2024-11-20 09:59:38.277818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.277852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.162 [2024-11-20 09:59:38.277984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.278050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.162 [2024-11-20 09:59:38.278269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.278303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.162 qpair failed and we were unable to recover it. 00:27:15.162 [2024-11-20 09:59:38.278494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.162 [2024-11-20 09:59:38.278527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 
00:27:15.163 [2024-11-20 09:59:38.278698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.278737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.278928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.278993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.279234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.279268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.279510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.279542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.279676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.279708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 
00:27:15.163 [2024-11-20 09:59:38.280012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.280048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.280303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.280335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.280470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.280503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.280772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.280805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.280983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.281017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 
00:27:15.163 [2024-11-20 09:59:38.281233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.281266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.281451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.281484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.281680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.281712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.281976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.282010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.282155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.282188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 
00:27:15.163 [2024-11-20 09:59:38.282386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.282420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.282611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.282645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.282895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.282928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.283132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.283166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.283375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.283409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 
00:27:15.163 [2024-11-20 09:59:38.283694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.283727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.283984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.284019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.284067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.163 [2024-11-20 09:59:38.284234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.284268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.284506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.284540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.284777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.284811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 
00:27:15.163 [2024-11-20 09:59:38.285041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.285077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.285197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.285230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.285418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.285452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.285621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.285654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.285836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.285868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 
00:27:15.163 [2024-11-20 09:59:38.286174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.286209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.286389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.286421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.286611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.286643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.286827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.286860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.163 qpair failed and we were unable to recover it. 00:27:15.163 [2024-11-20 09:59:38.287103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.163 [2024-11-20 09:59:38.287137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 
00:27:15.164 [2024-11-20 09:59:38.287310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.287343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.287464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.287497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.287796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.287830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.288044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.288079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.288269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.288302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 
00:27:15.164 [2024-11-20 09:59:38.288442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.288476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.288735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.288767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.289059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.289094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.289286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.289319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.289457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.289490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 
00:27:15.164 [2024-11-20 09:59:38.289749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.289783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.290067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.290102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.290295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.290329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.290443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.290476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.290692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.290725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 
00:27:15.164 [2024-11-20 09:59:38.290966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.291001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.291127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.291159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.291357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.291389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.291608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.291652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.291786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.291819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 
00:27:15.164 [2024-11-20 09:59:38.292024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.292059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.292276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.292308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.292437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.292470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.292695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.292728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba0000b90 with addr=10.0.0.2, port=4420 00:27:15.164 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.292959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.293006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 
00:27:15.164 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.164 [2024-11-20 09:59:38.293137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.293172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.293312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.293344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.164 [2024-11-20 09:59:38.293636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.293670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:15.164 [2024-11-20 09:59:38.293874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.293907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 
00:27:15.164 [2024-11-20 09:59:38.294034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.294068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.294266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.294298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.294433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.294465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.294595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.294627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.294887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.294919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 
00:27:15.164 [2024-11-20 09:59:38.295198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.295239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.295457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.295490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.295703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.295736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.295912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.295945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.164 [2024-11-20 09:59:38.296212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.296246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 
00:27:15.164 [2024-11-20 09:59:38.296428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.164 [2024-11-20 09:59:38.296461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.164 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.296762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.296795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.296908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.296942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.297201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.297235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.297431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.297467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 
00:27:15.165 [2024-11-20 09:59:38.297685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.297717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.297888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.297919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.298172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.298205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.298446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.298478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.298678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.298710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 
00:27:15.165 [2024-11-20 09:59:38.298898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.298930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.299129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.299163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.299349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.299381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.299515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.299548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.299670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.299702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 
00:27:15.165 [2024-11-20 09:59:38.299922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.299966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.300092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.300122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.300257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.300296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.300582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.300614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.165 [2024-11-20 09:59:38.300849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.300881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 
00:27:15.165 [2024-11-20 09:59:38.300996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.301029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.301158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:15.165 [2024-11-20 09:59:38.301191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 [2024-11-20 09:59:38.301428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.301460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 00:27:15.165 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.165 [2024-11-20 09:59:38.301687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.165 [2024-11-20 09:59:38.301720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420 00:27:15.165 qpair failed and we were unable to recover it. 
00:27:15.165 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:15.165 [2024-11-20 09:59:38.301902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.301934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.302137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.302170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.302345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.302377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.302558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.302590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.302770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.302802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.303068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.303103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.303222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.303255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.303466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.303498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.303799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.303831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.304012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.304046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.304216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.304249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.304376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.304409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.304650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.304682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.304873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.304905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.305156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.305191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.305373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.305406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.305701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.305733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.305848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.305878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.306067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.306107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.306373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.306405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.306611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.306643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.306884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.165 [2024-11-20 09:59:38.306916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.165 qpair failed and we were unable to recover it.
00:27:15.165 [2024-11-20 09:59:38.307063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.307101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.307294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.307327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.307523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.307556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.307796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.307830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.307943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.307991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.308136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.308169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.308455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.308489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.308674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.308706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.166 [2024-11-20 09:59:38.308965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.309000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7b9c000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.309119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.309155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:15.166 [2024-11-20 09:59:38.309387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.309419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.166 [2024-11-20 09:59:38.309535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.309568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.309806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:15.166 [2024-11-20 09:59:38.309838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.310122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.310156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.310300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.310333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.310547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.310580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.310781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.310814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.310995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.311028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.311266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.311299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.311488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.311519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.311694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.311727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.311997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.312032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.312168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.166 [2024-11-20 09:59:38.312202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7ba8000b90 with addr=10.0.0.2, port=4420
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.312305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:15.166 [2024-11-20 09:59:38.314802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.166 [2024-11-20 09:59:38.314912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.166 [2024-11-20 09:59:38.314968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.166 [2024-11-20 09:59:38.314994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.166 [2024-11-20 09:59:38.315013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.166 [2024-11-20 09:59:38.315079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.166 qpair failed and we were unable to recover it.
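Editor's note: the repeated `connect() failed, errno = 111` records above are all one failure mode. On Linux, errno 111 is ECONNREFUSED: the host kept dialing 10.0.0.2:4420 before the target's listener came up (the `nvmf_tcp_listen` notice). A minimal sketch of that transition, using loopback and a kernel-chosen port as hypothetical stand-ins for 10.0.0.2:4420:

```python
import errno
import socket

# errno 111 is ECONNREFUSED on Linux (the value differs on other platforms).
print(errno.errorcode.get(111))

# A port with no listener: bind to grab a free port number, then close it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# connect() now fails the same way the host's qpairs did above.
try:
    socket.create_connection(("127.0.0.1", port), timeout=1)
except ConnectionRefusedError as e:
    print(e.errno == errno.ECONNREFUSED)  # True

# Once a listener exists (cf. the "Target Listening" notice), the same
# TCP connect succeeds.
listener = socket.create_server(("127.0.0.1", 0))
conn = socket.create_connection(listener.getsockname(), timeout=1)
conn.close()
listener.close()
```

This only models the TCP layer; the fabric-level CONNECT failures that follow the listening notice happen after the socket is established.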
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:15.166 [2024-11-20 09:59:38.324720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.166 [2024-11-20 09:59:38.324801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.166 [2024-11-20 09:59:38.324829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.166 [2024-11-20 09:59:38.324845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.166 [2024-11-20 09:59:38.324858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.166 [2024-11-20 09:59:38.324892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 09:59:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3072352
00:27:15.166 [2024-11-20 09:59:38.334678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.166 [2024-11-20 09:59:38.334747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.166 [2024-11-20 09:59:38.334767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.166 [2024-11-20 09:59:38.334777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.166 [2024-11-20 09:59:38.334791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.166 [2024-11-20 09:59:38.334814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.344680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.166 [2024-11-20 09:59:38.344743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.166 [2024-11-20 09:59:38.344758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.166 [2024-11-20 09:59:38.344766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.166 [2024-11-20 09:59:38.344772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.166 [2024-11-20 09:59:38.344788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.354677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.166 [2024-11-20 09:59:38.354739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.166 [2024-11-20 09:59:38.354753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.166 [2024-11-20 09:59:38.354761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.166 [2024-11-20 09:59:38.354768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.166 [2024-11-20 09:59:38.354783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.166 qpair failed and we were unable to recover it.
00:27:15.166 [2024-11-20 09:59:38.364686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.364773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.364788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.364795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.364802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.364817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.167 [2024-11-20 09:59:38.374670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.374725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.374738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.374745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.374752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.374768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.167 [2024-11-20 09:59:38.384733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.384840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.384855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.384862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.384869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.384885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.167 [2024-11-20 09:59:38.394692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.394751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.394765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.394773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.394780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.394795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.167 [2024-11-20 09:59:38.404794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.404846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.404861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.404869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.404875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.404890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.167 [2024-11-20 09:59:38.414816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.414921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.414935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.414942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.414952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.414968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.167 [2024-11-20 09:59:38.424798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.424854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.424872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.424880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.424888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.424903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.167 [2024-11-20 09:59:38.434861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.434919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.434933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.434941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.434952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.434969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.167 [2024-11-20 09:59:38.444880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.444975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.444990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.444997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.445004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.445020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.167 [2024-11-20 09:59:38.454902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.167 [2024-11-20 09:59:38.454975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.167 [2024-11-20 09:59:38.454990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.167 [2024-11-20 09:59:38.454998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.167 [2024-11-20 09:59:38.455004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.167 [2024-11-20 09:59:38.455020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.167 qpair failed and we were unable to recover it.
00:27:15.428 [2024-11-20 09:59:38.464971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.465074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.465090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.465098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.465109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.465125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.475040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.475094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.475108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.475115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.475122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.475138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.485001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.485060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.485076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.485083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.485090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.485106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.495021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.495122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.495137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.495144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.495151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.495166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.505047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.505107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.505122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.505130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.505136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.505152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.515075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.515157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.515172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.515179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.515186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.515201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.525102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.525156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.525170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.525178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.525186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.525200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.535127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.535194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.535209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.535216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.535222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.535238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.545177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.545235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.545249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.545256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.545263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.545280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.555204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.555260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.555277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.555285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.555292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.555308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.428 [2024-11-20 09:59:38.565232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.428 [2024-11-20 09:59:38.565326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.428 [2024-11-20 09:59:38.565341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.428 [2024-11-20 09:59:38.565348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.428 [2024-11-20 09:59:38.565354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.428 [2024-11-20 09:59:38.565369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.428 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.575250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.575305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.575319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.575326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.575333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.575348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.585294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.585360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.585376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.585384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.585390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.585407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.595343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.595402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.595416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.595427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.595433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.595449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.605336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.605390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.605403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.605410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.605417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.605432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.615359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.615414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.615427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.615435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.615442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.615458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.625401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.625458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.625472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.625480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.625486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.625503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.635421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.635480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.635494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.635501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.635508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.635526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.645466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.645526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.645540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.645548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.645554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.645569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.655469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.655527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.655542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.655549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.655555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.655571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.665508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.665591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.665606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.665614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.665620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.665636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.675541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.675598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.675611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.675618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.675625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.675641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.685556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.685617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.685632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.685639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.685645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.685662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.429 [2024-11-20 09:59:38.695654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.429 [2024-11-20 09:59:38.695710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.429 [2024-11-20 09:59:38.695723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.429 [2024-11-20 09:59:38.695730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.429 [2024-11-20 09:59:38.695737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.429 [2024-11-20 09:59:38.695752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.429 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 09:59:38.705642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.430 [2024-11-20 09:59:38.705750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.430 [2024-11-20 09:59:38.705764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.430 [2024-11-20 09:59:38.705772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.430 [2024-11-20 09:59:38.705778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.430 [2024-11-20 09:59:38.705793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 09:59:38.715665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.430 [2024-11-20 09:59:38.715731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.430 [2024-11-20 09:59:38.715746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.430 [2024-11-20 09:59:38.715753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.430 [2024-11-20 09:59:38.715760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.430 [2024-11-20 09:59:38.715775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 09:59:38.725709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.430 [2024-11-20 09:59:38.725770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.430 [2024-11-20 09:59:38.725784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.430 [2024-11-20 09:59:38.725795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.430 [2024-11-20 09:59:38.725801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.430 [2024-11-20 09:59:38.725817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 09:59:38.735695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.430 [2024-11-20 09:59:38.735750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.430 [2024-11-20 09:59:38.735765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.430 [2024-11-20 09:59:38.735773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.430 [2024-11-20 09:59:38.735780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.430 [2024-11-20 09:59:38.735796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 09:59:38.745745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.430 [2024-11-20 09:59:38.745809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.430 [2024-11-20 09:59:38.745824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.430 [2024-11-20 09:59:38.745832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.430 [2024-11-20 09:59:38.745838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.430 [2024-11-20 09:59:38.745854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.430 qpair failed and we were unable to recover it. 
00:27:15.430 [2024-11-20 09:59:38.755761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.430 [2024-11-20 09:59:38.755813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.430 [2024-11-20 09:59:38.755827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.430 [2024-11-20 09:59:38.755834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.430 [2024-11-20 09:59:38.755841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.430 [2024-11-20 09:59:38.755856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.430 qpair failed and we were unable to recover it.
00:27:15.690 [2024-11-20 09:59:38.765799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.690 [2024-11-20 09:59:38.765853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.690 [2024-11-20 09:59:38.765868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.690 [2024-11-20 09:59:38.765875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.765883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.765902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.775870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.775927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.775941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.775953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.775960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.775977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.785860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.785926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.785942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.785954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.785961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.785977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.795881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.795937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.795955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.795963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.795969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.795986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.805905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.805967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.805981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.805988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.805995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.806011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.815931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.815991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.816005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.816013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.816019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.816036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.825977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.826037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.826051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.826059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.826065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.826081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.835999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.836079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.836094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.836101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.836107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.836123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.846024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.846083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.846097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.846105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.846111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.846126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.856053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.856118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.856137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.856144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.856150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.856166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.866110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.866217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.866231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.866239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.866245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.866260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.876050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.876110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.876124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.876131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.876138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.876153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.886151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.886205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.691 [2024-11-20 09:59:38.886221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.691 [2024-11-20 09:59:38.886228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.691 [2024-11-20 09:59:38.886234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.691 [2024-11-20 09:59:38.886249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.691 qpair failed and we were unable to recover it.
00:27:15.691 [2024-11-20 09:59:38.896174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.691 [2024-11-20 09:59:38.896232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.896246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.896253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.896262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.896278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.906224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.906285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.906299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.906307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.906313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.906329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.916234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.916296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.916311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.916318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.916324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.916340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.926191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.926247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.926262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.926270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.926277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.926294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.936283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.936340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.936353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.936361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.936368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.936384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.946372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.946437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.946451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.946460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.946466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.946481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.956397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.956486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.956500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.956507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.956514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.956529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.966370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.966426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.966440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.966448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.966455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.966470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.976438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.976504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.976519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.976526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.976533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.976547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.986435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.986495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.986514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.986521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.986527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.986545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:38.996469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:38.996526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:38.996540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:38.996548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:38.996555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:38.996571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:39.006500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:39.006558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:39.006572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:39.006580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:39.006586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:39.006601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.692 [2024-11-20 09:59:39.016515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.692 [2024-11-20 09:59:39.016571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.692 [2024-11-20 09:59:39.016584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.692 [2024-11-20 09:59:39.016592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.692 [2024-11-20 09:59:39.016599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.692 [2024-11-20 09:59:39.016614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.692 qpair failed and we were unable to recover it.
00:27:15.953 [2024-11-20 09:59:39.026493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.953 [2024-11-20 09:59:39.026553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.953 [2024-11-20 09:59:39.026567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.953 [2024-11-20 09:59:39.026574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.953 [2024-11-20 09:59:39.026585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.953 [2024-11-20 09:59:39.026600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.953 qpair failed and we were unable to recover it.
00:27:15.953 [2024-11-20 09:59:39.036678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.953 [2024-11-20 09:59:39.036759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.953 [2024-11-20 09:59:39.036773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.953 [2024-11-20 09:59:39.036781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.953 [2024-11-20 09:59:39.036787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.953 [2024-11-20 09:59:39.036803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.953 qpair failed and we were unable to recover it.
00:27:15.953 [2024-11-20 09:59:39.046593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.953 [2024-11-20 09:59:39.046648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.953 [2024-11-20 09:59:39.046662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.953 [2024-11-20 09:59:39.046669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.953 [2024-11-20 09:59:39.046677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.953 [2024-11-20 09:59:39.046692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.953 qpair failed and we were unable to recover it.
00:27:15.953 [2024-11-20 09:59:39.056585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.953 [2024-11-20 09:59:39.056638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.953 [2024-11-20 09:59:39.056653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.953 [2024-11-20 09:59:39.056660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.953 [2024-11-20 09:59:39.056667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.953 [2024-11-20 09:59:39.056684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.953 qpair failed and we were unable to recover it.
00:27:15.953 [2024-11-20 09:59:39.066612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.953 [2024-11-20 09:59:39.066667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.953 [2024-11-20 09:59:39.066681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.953 [2024-11-20 09:59:39.066688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.953 [2024-11-20 09:59:39.066695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.953 [2024-11-20 09:59:39.066711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.953 qpair failed and we were unable to recover it.
00:27:15.953 [2024-11-20 09:59:39.076720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.953 [2024-11-20 09:59:39.076773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.953 [2024-11-20 09:59:39.076787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.953 [2024-11-20 09:59:39.076795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.953 [2024-11-20 09:59:39.076802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.953 [2024-11-20 09:59:39.076817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.953 qpair failed and we were unable to recover it.
00:27:15.953 [2024-11-20 09:59:39.086727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.953 [2024-11-20 09:59:39.086783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.953 [2024-11-20 09:59:39.086797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.953 [2024-11-20 09:59:39.086805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.953 [2024-11-20 09:59:39.086812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.953 [2024-11-20 09:59:39.086828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.953 qpair failed and we were unable to recover it.
00:27:15.953 [2024-11-20 09:59:39.096761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:15.953 [2024-11-20 09:59:39.096815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:15.953 [2024-11-20 09:59:39.096829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:15.953 [2024-11-20 09:59:39.096837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:15.953 [2024-11-20 09:59:39.096844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:15.953 [2024-11-20 09:59:39.096860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.953 qpair failed and we were unable to recover it.
00:27:15.953 [2024-11-20 09:59:39.106899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.953 [2024-11-20 09:59:39.106975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.953 [2024-11-20 09:59:39.106990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.953 [2024-11-20 09:59:39.106998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.953 [2024-11-20 09:59:39.107004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.953 [2024-11-20 09:59:39.107020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.953 qpair failed and we were unable to recover it. 
00:27:15.953 [2024-11-20 09:59:39.116865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.953 [2024-11-20 09:59:39.116920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.116938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.116945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.116956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.116972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.126894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.126950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.126964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.126972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.126979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.126994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.136973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.137074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.137089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.137096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.137102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.137118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.146918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.146988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.147002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.147011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.147017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.147031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.156984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.157039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.157053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.157063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.157070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.157085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.166981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.167039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.167053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.167060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.167067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.167082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.176986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.177038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.177053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.177061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.177068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.177085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.187035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.187091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.187106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.187113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.187119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.187135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.197048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.197098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.197112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.197120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.197126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.197146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.207080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.207140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.207154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.207162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.207169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.207184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.217101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.217157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.217171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.217179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.217186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.217201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.227145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.227204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.227218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.227225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.227232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.227248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.954 [2024-11-20 09:59:39.237172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.954 [2024-11-20 09:59:39.237228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.954 [2024-11-20 09:59:39.237242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.954 [2024-11-20 09:59:39.237249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.954 [2024-11-20 09:59:39.237256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.954 [2024-11-20 09:59:39.237271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.954 qpair failed and we were unable to recover it. 
00:27:15.955 [2024-11-20 09:59:39.247251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.955 [2024-11-20 09:59:39.247362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.955 [2024-11-20 09:59:39.247378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.955 [2024-11-20 09:59:39.247386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.955 [2024-11-20 09:59:39.247393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.955 [2024-11-20 09:59:39.247409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.955 qpair failed and we were unable to recover it. 
00:27:15.955 [2024-11-20 09:59:39.257253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.955 [2024-11-20 09:59:39.257311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.955 [2024-11-20 09:59:39.257325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.955 [2024-11-20 09:59:39.257332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.955 [2024-11-20 09:59:39.257339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.955 [2024-11-20 09:59:39.257355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.955 qpair failed and we were unable to recover it. 
00:27:15.955 [2024-11-20 09:59:39.267256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.955 [2024-11-20 09:59:39.267333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.955 [2024-11-20 09:59:39.267348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.955 [2024-11-20 09:59:39.267355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.955 [2024-11-20 09:59:39.267361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.955 [2024-11-20 09:59:39.267376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.955 qpair failed and we were unable to recover it. 
00:27:15.955 [2024-11-20 09:59:39.277315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:15.955 [2024-11-20 09:59:39.277393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:15.955 [2024-11-20 09:59:39.277407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:15.955 [2024-11-20 09:59:39.277414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:15.955 [2024-11-20 09:59:39.277421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:15.955 [2024-11-20 09:59:39.277436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:15.955 qpair failed and we were unable to recover it. 
00:27:16.214 [2024-11-20 09:59:39.287307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.214 [2024-11-20 09:59:39.287366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.214 [2024-11-20 09:59:39.287381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.214 [2024-11-20 09:59:39.287392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.214 [2024-11-20 09:59:39.287398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.214 [2024-11-20 09:59:39.287415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.214 qpair failed and we were unable to recover it. 
00:27:16.214 [2024-11-20 09:59:39.297346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.214 [2024-11-20 09:59:39.297402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.214 [2024-11-20 09:59:39.297416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.214 [2024-11-20 09:59:39.297423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.214 [2024-11-20 09:59:39.297430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.214 [2024-11-20 09:59:39.297445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.214 qpair failed and we were unable to recover it. 
00:27:16.214 [2024-11-20 09:59:39.307400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.214 [2024-11-20 09:59:39.307456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.214 [2024-11-20 09:59:39.307470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.214 [2024-11-20 09:59:39.307477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.214 [2024-11-20 09:59:39.307483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.215 [2024-11-20 09:59:39.307498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.215 qpair failed and we were unable to recover it. 
00:27:16.215 [2024-11-20 09:59:39.317407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.215 [2024-11-20 09:59:39.317468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.215 [2024-11-20 09:59:39.317482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.215 [2024-11-20 09:59:39.317490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.215 [2024-11-20 09:59:39.317496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.215 [2024-11-20 09:59:39.317511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.215 qpair failed and we were unable to recover it. 
00:27:16.215 [2024-11-20 09:59:39.327429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.215 [2024-11-20 09:59:39.327513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.215 [2024-11-20 09:59:39.327527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.215 [2024-11-20 09:59:39.327534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.215 [2024-11-20 09:59:39.327540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.215 [2024-11-20 09:59:39.327558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.215 qpair failed and we were unable to recover it. 
00:27:16.215 [2024-11-20 09:59:39.337484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.215 [2024-11-20 09:59:39.337540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.215 [2024-11-20 09:59:39.337555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.215 [2024-11-20 09:59:39.337562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.215 [2024-11-20 09:59:39.337568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.215 [2024-11-20 09:59:39.337584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.215 qpair failed and we were unable to recover it. 
00:27:16.215 [2024-11-20 09:59:39.347527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.215 [2024-11-20 09:59:39.347634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.215 [2024-11-20 09:59:39.347648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.215 [2024-11-20 09:59:39.347656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.215 [2024-11-20 09:59:39.347662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.215 [2024-11-20 09:59:39.347678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.215 qpair failed and we were unable to recover it. 
00:27:16.215 [2024-11-20 09:59:39.357450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.215 [2024-11-20 09:59:39.357509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.215 [2024-11-20 09:59:39.357523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.215 [2024-11-20 09:59:39.357531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.215 [2024-11-20 09:59:39.357537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.215 [2024-11-20 09:59:39.357552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.215 qpair failed and we were unable to recover it. 
00:27:16.215 [2024-11-20 09:59:39.367577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.215 [2024-11-20 09:59:39.367636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.215 [2024-11-20 09:59:39.367651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.215 [2024-11-20 09:59:39.367659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.215 [2024-11-20 09:59:39.367665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.215 [2024-11-20 09:59:39.367680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.215 qpair failed and we were unable to recover it. 
00:27:16.215 [2024-11-20 09:59:39.377590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.215 [2024-11-20 09:59:39.377648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.215 [2024-11-20 09:59:39.377661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.215 [2024-11-20 09:59:39.377669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.215 [2024-11-20 09:59:39.377675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.215 [2024-11-20 09:59:39.377691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.215 qpair failed and we were unable to recover it.
00:27:16.215 [2024-11-20 09:59:39.387542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.215 [2024-11-20 09:59:39.387627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.215 [2024-11-20 09:59:39.387642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.215 [2024-11-20 09:59:39.387650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.215 [2024-11-20 09:59:39.387656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.215 [2024-11-20 09:59:39.387671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.215 qpair failed and we were unable to recover it.
00:27:16.215 [2024-11-20 09:59:39.397560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.215 [2024-11-20 09:59:39.397614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.215 [2024-11-20 09:59:39.397628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.215 [2024-11-20 09:59:39.397636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.215 [2024-11-20 09:59:39.397642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.215 [2024-11-20 09:59:39.397657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.215 qpair failed and we were unable to recover it.
00:27:16.215 [2024-11-20 09:59:39.407576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.215 [2024-11-20 09:59:39.407632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.215 [2024-11-20 09:59:39.407646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.215 [2024-11-20 09:59:39.407653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.215 [2024-11-20 09:59:39.407660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.215 [2024-11-20 09:59:39.407676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.215 qpair failed and we were unable to recover it.
00:27:16.215 [2024-11-20 09:59:39.417673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.215 [2024-11-20 09:59:39.417725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.215 [2024-11-20 09:59:39.417741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.215 [2024-11-20 09:59:39.417749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.215 [2024-11-20 09:59:39.417755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.215 [2024-11-20 09:59:39.417771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.215 qpair failed and we were unable to recover it.
00:27:16.215 [2024-11-20 09:59:39.427628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.215 [2024-11-20 09:59:39.427697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.215 [2024-11-20 09:59:39.427712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.215 [2024-11-20 09:59:39.427720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.215 [2024-11-20 09:59:39.427728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.215 [2024-11-20 09:59:39.427744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.215 qpair failed and we were unable to recover it.
00:27:16.215 [2024-11-20 09:59:39.437719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.215 [2024-11-20 09:59:39.437788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.215 [2024-11-20 09:59:39.437802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.215 [2024-11-20 09:59:39.437810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.437816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.437831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.447742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.447799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.447813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.447820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.447826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.447842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.457704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.457762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.457776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.457783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.457793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.457808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.467780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.467838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.467852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.467859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.467866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.467881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.477770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.477848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.477863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.477870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.477876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.477891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.487796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.487861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.487877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.487885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.487892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.487908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.497821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.497874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.497889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.497896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.497902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.497918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.507922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.507995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.508010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.508017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.508023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.508039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.517936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.518003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.518017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.518025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.518030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.518046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.527999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.528064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.528077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.528085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.528091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.528106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.216 [2024-11-20 09:59:39.538004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.216 [2024-11-20 09:59:39.538060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.216 [2024-11-20 09:59:39.538073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.216 [2024-11-20 09:59:39.538080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.216 [2024-11-20 09:59:39.538087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.216 [2024-11-20 09:59:39.538102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.216 qpair failed and we were unable to recover it.
00:27:16.477 [2024-11-20 09:59:39.547969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.477 [2024-11-20 09:59:39.548029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.477 [2024-11-20 09:59:39.548046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.477 [2024-11-20 09:59:39.548053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.477 [2024-11-20 09:59:39.548059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.477 [2024-11-20 09:59:39.548075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.477 qpair failed and we were unable to recover it.
00:27:16.477 [2024-11-20 09:59:39.558046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.477 [2024-11-20 09:59:39.558104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.477 [2024-11-20 09:59:39.558118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.477 [2024-11-20 09:59:39.558126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.477 [2024-11-20 09:59:39.558132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.477 [2024-11-20 09:59:39.558147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.477 qpair failed and we were unable to recover it.
00:27:16.477 [2024-11-20 09:59:39.568083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.477 [2024-11-20 09:59:39.568139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.477 [2024-11-20 09:59:39.568153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.477 [2024-11-20 09:59:39.568160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.477 [2024-11-20 09:59:39.568167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.477 [2024-11-20 09:59:39.568182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.477 qpair failed and we were unable to recover it.
00:27:16.477 [2024-11-20 09:59:39.578038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.477 [2024-11-20 09:59:39.578095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.477 [2024-11-20 09:59:39.578108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.477 [2024-11-20 09:59:39.578116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.477 [2024-11-20 09:59:39.578124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.477 [2024-11-20 09:59:39.578139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.477 qpair failed and we were unable to recover it.
00:27:16.477 [2024-11-20 09:59:39.588177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.477 [2024-11-20 09:59:39.588257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.477 [2024-11-20 09:59:39.588272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.477 [2024-11-20 09:59:39.588279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.477 [2024-11-20 09:59:39.588288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.477 [2024-11-20 09:59:39.588305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.477 qpair failed and we were unable to recover it.
00:27:16.477 [2024-11-20 09:59:39.598122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.477 [2024-11-20 09:59:39.598214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.477 [2024-11-20 09:59:39.598228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.477 [2024-11-20 09:59:39.598236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.477 [2024-11-20 09:59:39.598242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.477 [2024-11-20 09:59:39.598256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.477 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.608203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.608282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.608298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.608305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.608311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.608327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.618217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.618271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.618285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.618292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.618299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.618314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.628213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.628272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.628287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.628294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.628301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.628317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.638204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.638308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.638322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.638330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.638336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.638351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.648249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.648303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.648316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.648323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.648330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.648346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.658340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.658394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.658408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.658415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.658422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.658438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.668353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.668408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.668422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.668429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.668436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.668452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.678361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.678421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.678439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.678446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.678452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.678468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.688368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.688426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.688441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.688448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.688455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.688470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.698498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.698556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.698571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.698578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.698585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.698601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.708422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.708476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.708490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.708497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.708503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.708519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.718550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:16.478 [2024-11-20 09:59:39.718611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:16.478 [2024-11-20 09:59:39.718625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:16.478 [2024-11-20 09:59:39.718637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:16.478 [2024-11-20 09:59:39.718643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:16.478 [2024-11-20 09:59:39.718659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.478 qpair failed and we were unable to recover it.
00:27:16.478 [2024-11-20 09:59:39.728560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.478 [2024-11-20 09:59:39.728614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.478 [2024-11-20 09:59:39.728627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.478 [2024-11-20 09:59:39.728635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.479 [2024-11-20 09:59:39.728642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.479 [2024-11-20 09:59:39.728658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.479 qpair failed and we were unable to recover it. 
00:27:16.479 [2024-11-20 09:59:39.738529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.479 [2024-11-20 09:59:39.738594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.479 [2024-11-20 09:59:39.738608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.479 [2024-11-20 09:59:39.738616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.479 [2024-11-20 09:59:39.738622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.479 [2024-11-20 09:59:39.738637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.479 qpair failed and we were unable to recover it. 
00:27:16.479 [2024-11-20 09:59:39.748614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.479 [2024-11-20 09:59:39.748670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.479 [2024-11-20 09:59:39.748683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.479 [2024-11-20 09:59:39.748690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.479 [2024-11-20 09:59:39.748697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.479 [2024-11-20 09:59:39.748712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.479 qpair failed and we were unable to recover it. 
00:27:16.479 [2024-11-20 09:59:39.758691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.479 [2024-11-20 09:59:39.758751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.479 [2024-11-20 09:59:39.758765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.479 [2024-11-20 09:59:39.758772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.479 [2024-11-20 09:59:39.758779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.479 [2024-11-20 09:59:39.758798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.479 qpair failed and we were unable to recover it. 
00:27:16.479 [2024-11-20 09:59:39.768598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.479 [2024-11-20 09:59:39.768654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.479 [2024-11-20 09:59:39.768668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.479 [2024-11-20 09:59:39.768675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.479 [2024-11-20 09:59:39.768682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.479 [2024-11-20 09:59:39.768697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.479 qpair failed and we were unable to recover it. 
00:27:16.479 [2024-11-20 09:59:39.778698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.479 [2024-11-20 09:59:39.778780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.479 [2024-11-20 09:59:39.778794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.479 [2024-11-20 09:59:39.778800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.479 [2024-11-20 09:59:39.778807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.479 [2024-11-20 09:59:39.778821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.479 qpair failed and we were unable to recover it. 
00:27:16.479 [2024-11-20 09:59:39.788661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.479 [2024-11-20 09:59:39.788719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.479 [2024-11-20 09:59:39.788734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.479 [2024-11-20 09:59:39.788741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.479 [2024-11-20 09:59:39.788748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.479 [2024-11-20 09:59:39.788764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.479 qpair failed and we were unable to recover it. 
00:27:16.479 [2024-11-20 09:59:39.798765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.479 [2024-11-20 09:59:39.798819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.479 [2024-11-20 09:59:39.798834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.479 [2024-11-20 09:59:39.798841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.479 [2024-11-20 09:59:39.798848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.479 [2024-11-20 09:59:39.798864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.479 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 09:59:39.808742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.739 [2024-11-20 09:59:39.808801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.739 [2024-11-20 09:59:39.808816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.739 [2024-11-20 09:59:39.808823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.739 [2024-11-20 09:59:39.808829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.739 [2024-11-20 09:59:39.808845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.739 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 09:59:39.818833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.739 [2024-11-20 09:59:39.818885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.739 [2024-11-20 09:59:39.818899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.739 [2024-11-20 09:59:39.818906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.739 [2024-11-20 09:59:39.818913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.739 [2024-11-20 09:59:39.818928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.739 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 09:59:39.828793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.739 [2024-11-20 09:59:39.828851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.739 [2024-11-20 09:59:39.828864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.739 [2024-11-20 09:59:39.828872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.739 [2024-11-20 09:59:39.828878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.739 [2024-11-20 09:59:39.828894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.739 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 09:59:39.838885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.739 [2024-11-20 09:59:39.838940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.739 [2024-11-20 09:59:39.838961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.739 [2024-11-20 09:59:39.838968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.739 [2024-11-20 09:59:39.838975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.739 [2024-11-20 09:59:39.838990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.739 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 09:59:39.848907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.739 [2024-11-20 09:59:39.848971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.739 [2024-11-20 09:59:39.848985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.739 [2024-11-20 09:59:39.848996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.739 [2024-11-20 09:59:39.849002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.739 [2024-11-20 09:59:39.849018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.739 qpair failed and we were unable to recover it. 
00:27:16.739 [2024-11-20 09:59:39.858987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.859045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.859059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.859067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.859074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.859089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.868973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.869037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.869051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.869059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.869065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.869080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.879002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.879085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.879100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.879107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.879113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.879128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.889017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.889072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.889087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.889094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.889100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.889120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.899045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.899096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.899110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.899117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.899123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.899138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.909097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.909191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.909205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.909213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.909219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.909235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.919132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.919192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.919205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.919212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.919219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.919234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.929135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.929190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.929204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.929211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.929218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.929234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.939172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.939235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.939249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.939256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.939262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.939278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.949223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.949280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.949294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.949300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.949308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.949323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.959241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.959298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.959312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.959319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.959326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.959341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.969256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.969305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.969319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.969326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.969333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.969348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.979283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.740 [2024-11-20 09:59:39.979340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.740 [2024-11-20 09:59:39.979357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.740 [2024-11-20 09:59:39.979365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.740 [2024-11-20 09:59:39.979372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.740 [2024-11-20 09:59:39.979387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.740 qpair failed and we were unable to recover it. 
00:27:16.740 [2024-11-20 09:59:39.989327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.741 [2024-11-20 09:59:39.989392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.741 [2024-11-20 09:59:39.989407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.741 [2024-11-20 09:59:39.989414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.741 [2024-11-20 09:59:39.989420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.741 [2024-11-20 09:59:39.989437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 09:59:39.999353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.741 [2024-11-20 09:59:39.999411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.741 [2024-11-20 09:59:39.999426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.741 [2024-11-20 09:59:39.999433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.741 [2024-11-20 09:59:39.999439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.741 [2024-11-20 09:59:39.999454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 09:59:40.009327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.741 [2024-11-20 09:59:40.009388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.741 [2024-11-20 09:59:40.009407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.741 [2024-11-20 09:59:40.009415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.741 [2024-11-20 09:59:40.009422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.741 [2024-11-20 09:59:40.009440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 09:59:40.019325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.741 [2024-11-20 09:59:40.019385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.741 [2024-11-20 09:59:40.019400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.741 [2024-11-20 09:59:40.019408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.741 [2024-11-20 09:59:40.019417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.741 [2024-11-20 09:59:40.019434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 09:59:40.029418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.741 [2024-11-20 09:59:40.029477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.741 [2024-11-20 09:59:40.029492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.741 [2024-11-20 09:59:40.029500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.741 [2024-11-20 09:59:40.029507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.741 [2024-11-20 09:59:40.029523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 09:59:40.039529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.741 [2024-11-20 09:59:40.039595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.741 [2024-11-20 09:59:40.039610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.741 [2024-11-20 09:59:40.039618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.741 [2024-11-20 09:59:40.039624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.741 [2024-11-20 09:59:40.039640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 09:59:40.049504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.741 [2024-11-20 09:59:40.049564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.741 [2024-11-20 09:59:40.049581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.741 [2024-11-20 09:59:40.049589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.741 [2024-11-20 09:59:40.049595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.741 [2024-11-20 09:59:40.049613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:16.741 [2024-11-20 09:59:40.059486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:16.741 [2024-11-20 09:59:40.059541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:16.741 [2024-11-20 09:59:40.059566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:16.741 [2024-11-20 09:59:40.059577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:16.741 [2024-11-20 09:59:40.059590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:16.741 [2024-11-20 09:59:40.059614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:16.741 qpair failed and we were unable to recover it. 
00:27:17.002 [2024-11-20 09:59:40.069594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.002 [2024-11-20 09:59:40.069697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.002 [2024-11-20 09:59:40.069715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.002 [2024-11-20 09:59:40.069723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.002 [2024-11-20 09:59:40.069729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.002 [2024-11-20 09:59:40.069746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.002 qpair failed and we were unable to recover it. 
00:27:17.002 [2024-11-20 09:59:40.079544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.002 [2024-11-20 09:59:40.079599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.002 [2024-11-20 09:59:40.079616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.002 [2024-11-20 09:59:40.079624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.002 [2024-11-20 09:59:40.079631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.002 [2024-11-20 09:59:40.079648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.002 qpair failed and we were unable to recover it. 
00:27:17.002 [2024-11-20 09:59:40.089655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.002 [2024-11-20 09:59:40.089715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.002 [2024-11-20 09:59:40.089733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.002 [2024-11-20 09:59:40.089741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.002 [2024-11-20 09:59:40.089747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.002 [2024-11-20 09:59:40.089764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.002 qpair failed and we were unable to recover it. 
00:27:17.002 [2024-11-20 09:59:40.099681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.002 [2024-11-20 09:59:40.099750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.002 [2024-11-20 09:59:40.099766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.002 [2024-11-20 09:59:40.099774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.002 [2024-11-20 09:59:40.099781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.002 [2024-11-20 09:59:40.099798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.002 qpair failed and we were unable to recover it. 
00:27:17.002 [2024-11-20 09:59:40.109618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.002 [2024-11-20 09:59:40.109678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.002 [2024-11-20 09:59:40.109698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.002 [2024-11-20 09:59:40.109706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.002 [2024-11-20 09:59:40.109712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.002 [2024-11-20 09:59:40.109729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.002 qpair failed and we were unable to recover it. 
00:27:17.002 [2024-11-20 09:59:40.119699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.002 [2024-11-20 09:59:40.119758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.002 [2024-11-20 09:59:40.119774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.002 [2024-11-20 09:59:40.119782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.002 [2024-11-20 09:59:40.119789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.002 [2024-11-20 09:59:40.119805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.002 qpair failed and we were unable to recover it. 
00:27:17.002 [2024-11-20 09:59:40.129743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.002 [2024-11-20 09:59:40.129808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.002 [2024-11-20 09:59:40.129824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.002 [2024-11-20 09:59:40.129831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.002 [2024-11-20 09:59:40.129838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.002 [2024-11-20 09:59:40.129855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.002 qpair failed and we were unable to recover it. 
00:27:17.002 [2024-11-20 09:59:40.139768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.002 [2024-11-20 09:59:40.139827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.002 [2024-11-20 09:59:40.139844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.002 [2024-11-20 09:59:40.139852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.002 [2024-11-20 09:59:40.139858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.002 [2024-11-20 09:59:40.139877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.002 qpair failed and we were unable to recover it. 
00:27:17.002 [2024-11-20 09:59:40.149791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.002 [2024-11-20 09:59:40.149868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.149885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.149893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.149903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.149920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.159824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.159881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.159898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.159906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.159913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.159929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.169841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.169895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.169911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.169919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.169925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.169942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.179901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.179989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.180006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.180014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.180021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.180037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.189929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.190002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.190019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.190026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.190033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.190050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.199930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.199996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.200022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.200030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.200037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.200059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.209887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.209955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.209972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.209980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.209987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.210004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.219987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.220042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.220059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.220066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.220073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.220090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.230031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.230107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.230123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.230131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.230139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.230156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.240095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.240156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.240178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.240190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.240198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.240215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.250061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.250122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.250138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.250146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.250153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.250170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.260096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.260151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.260167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.260175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.260182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.260198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.270141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.270200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.270216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.003 [2024-11-20 09:59:40.270224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.003 [2024-11-20 09:59:40.270230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.003 [2024-11-20 09:59:40.270250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.003 qpair failed and we were unable to recover it. 
00:27:17.003 [2024-11-20 09:59:40.280188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.003 [2024-11-20 09:59:40.280259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.003 [2024-11-20 09:59:40.280275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.004 [2024-11-20 09:59:40.280286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.004 [2024-11-20 09:59:40.280293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.004 [2024-11-20 09:59:40.280309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 09:59:40.290185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.004 [2024-11-20 09:59:40.290242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.004 [2024-11-20 09:59:40.290258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.004 [2024-11-20 09:59:40.290266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.004 [2024-11-20 09:59:40.290272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.004 [2024-11-20 09:59:40.290289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 09:59:40.300192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.004 [2024-11-20 09:59:40.300252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.004 [2024-11-20 09:59:40.300270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.004 [2024-11-20 09:59:40.300278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.004 [2024-11-20 09:59:40.300285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.004 [2024-11-20 09:59:40.300302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 09:59:40.310266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.004 [2024-11-20 09:59:40.310338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.004 [2024-11-20 09:59:40.310355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.004 [2024-11-20 09:59:40.310362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.004 [2024-11-20 09:59:40.310369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.004 [2024-11-20 09:59:40.310386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 09:59:40.320278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.004 [2024-11-20 09:59:40.320354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.004 [2024-11-20 09:59:40.320371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.004 [2024-11-20 09:59:40.320379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.004 [2024-11-20 09:59:40.320385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.004 [2024-11-20 09:59:40.320405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.004 [2024-11-20 09:59:40.330361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.004 [2024-11-20 09:59:40.330469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.004 [2024-11-20 09:59:40.330485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.004 [2024-11-20 09:59:40.330492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.004 [2024-11-20 09:59:40.330499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.004 [2024-11-20 09:59:40.330516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.004 qpair failed and we were unable to recover it. 
00:27:17.265 [2024-11-20 09:59:40.340345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.265 [2024-11-20 09:59:40.340406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.265 [2024-11-20 09:59:40.340423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.265 [2024-11-20 09:59:40.340431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.265 [2024-11-20 09:59:40.340437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.265 [2024-11-20 09:59:40.340454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.265 qpair failed and we were unable to recover it. 
00:27:17.265 [2024-11-20 09:59:40.350393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.265 [2024-11-20 09:59:40.350499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.265 [2024-11-20 09:59:40.350515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.265 [2024-11-20 09:59:40.350522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.265 [2024-11-20 09:59:40.350529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.265 [2024-11-20 09:59:40.350546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.265 qpair failed and we were unable to recover it. 
00:27:17.265 [2024-11-20 09:59:40.360399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.265 [2024-11-20 09:59:40.360457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.265 [2024-11-20 09:59:40.360472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.265 [2024-11-20 09:59:40.360480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.265 [2024-11-20 09:59:40.360487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.265 [2024-11-20 09:59:40.360504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.265 qpair failed and we were unable to recover it. 
00:27:17.265 [2024-11-20 09:59:40.370419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.265 [2024-11-20 09:59:40.370480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.265 [2024-11-20 09:59:40.370496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.265 [2024-11-20 09:59:40.370504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.265 [2024-11-20 09:59:40.370510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.265 [2024-11-20 09:59:40.370526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.265 qpair failed and we were unable to recover it. 
00:27:17.265 [2024-11-20 09:59:40.380493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.265 [2024-11-20 09:59:40.380551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.265 [2024-11-20 09:59:40.380567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.265 [2024-11-20 09:59:40.380574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.265 [2024-11-20 09:59:40.380581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.265 [2024-11-20 09:59:40.380597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.265 qpair failed and we were unable to recover it. 
00:27:17.265 [2024-11-20 09:59:40.390480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.265 [2024-11-20 09:59:40.390537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.265 [2024-11-20 09:59:40.390553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.265 [2024-11-20 09:59:40.390561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.390568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.390585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.400508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.400567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.400583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.400591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.400597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.400614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.410534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.410593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.410608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.410620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.410626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.410643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.420557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.420655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.420671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.420679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.420686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.420703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.430544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.430600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.430616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.430624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.430630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.430647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.440640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.440721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.440738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.440747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.440756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.440777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.450666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.450721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.450738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.450746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.450752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.450772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.460674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.460729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.460746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.460754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.460760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.460777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.470686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.470744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.470760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.470768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.470774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.470791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.480734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.480808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.480825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.480833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.480839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.480855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.490773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.490828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.490844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.490852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.490858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.490875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.500737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.500799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.500816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.500823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.500830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.500846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.266 [2024-11-20 09:59:40.510820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.266 [2024-11-20 09:59:40.510877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.266 [2024-11-20 09:59:40.510893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.266 [2024-11-20 09:59:40.510901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.266 [2024-11-20 09:59:40.510908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.266 [2024-11-20 09:59:40.510925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.266 qpair failed and we were unable to recover it. 
00:27:17.267 [2024-11-20 09:59:40.520848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.267 [2024-11-20 09:59:40.520900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.267 [2024-11-20 09:59:40.520916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.267 [2024-11-20 09:59:40.520924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.267 [2024-11-20 09:59:40.520931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.267 [2024-11-20 09:59:40.520951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.267 qpair failed and we were unable to recover it. 
00:27:17.267 [2024-11-20 09:59:40.530815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.267 [2024-11-20 09:59:40.530887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.267 [2024-11-20 09:59:40.530904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.267 [2024-11-20 09:59:40.530911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.267 [2024-11-20 09:59:40.530918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.267 [2024-11-20 09:59:40.530935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.267 qpair failed and we were unable to recover it. 
00:27:17.267 [2024-11-20 09:59:40.540945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.267 [2024-11-20 09:59:40.541013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.267 [2024-11-20 09:59:40.541031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.267 [2024-11-20 09:59:40.541038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.267 [2024-11-20 09:59:40.541045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.267 [2024-11-20 09:59:40.541060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.267 qpair failed and we were unable to recover it. 
00:27:17.267 [2024-11-20 09:59:40.550936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.267 [2024-11-20 09:59:40.550999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.267 [2024-11-20 09:59:40.551013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.267 [2024-11-20 09:59:40.551021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.267 [2024-11-20 09:59:40.551028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.267 [2024-11-20 09:59:40.551043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.267 qpair failed and we were unable to recover it. 
00:27:17.267 [2024-11-20 09:59:40.560978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.267 [2024-11-20 09:59:40.561041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.267 [2024-11-20 09:59:40.561056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.267 [2024-11-20 09:59:40.561063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.267 [2024-11-20 09:59:40.561071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.267 [2024-11-20 09:59:40.561087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.267 qpair failed and we were unable to recover it. 
00:27:17.267 [2024-11-20 09:59:40.570989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.267 [2024-11-20 09:59:40.571065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.267 [2024-11-20 09:59:40.571079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.267 [2024-11-20 09:59:40.571087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.267 [2024-11-20 09:59:40.571093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.267 [2024-11-20 09:59:40.571108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.267 qpair failed and we were unable to recover it. 
00:27:17.267 [2024-11-20 09:59:40.581026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.267 [2024-11-20 09:59:40.581084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.267 [2024-11-20 09:59:40.581098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.267 [2024-11-20 09:59:40.581106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.267 [2024-11-20 09:59:40.581116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.267 [2024-11-20 09:59:40.581131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.267 qpair failed and we were unable to recover it. 
00:27:17.267 [2024-11-20 09:59:40.591068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.267 [2024-11-20 09:59:40.591129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.267 [2024-11-20 09:59:40.591146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.267 [2024-11-20 09:59:40.591154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.267 [2024-11-20 09:59:40.591161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.267 [2024-11-20 09:59:40.591178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.267 qpair failed and we were unable to recover it. 
00:27:17.529 [2024-11-20 09:59:40.601131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.529 [2024-11-20 09:59:40.601240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.529 [2024-11-20 09:59:40.601256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.529 [2024-11-20 09:59:40.601264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.529 [2024-11-20 09:59:40.601270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.529 [2024-11-20 09:59:40.601286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.529 qpair failed and we were unable to recover it. 
00:27:17.529 [2024-11-20 09:59:40.611111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.529 [2024-11-20 09:59:40.611163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.529 [2024-11-20 09:59:40.611177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.529 [2024-11-20 09:59:40.611183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.529 [2024-11-20 09:59:40.611190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.529 [2024-11-20 09:59:40.611206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.529 qpair failed and we were unable to recover it. 
00:27:17.529 [2024-11-20 09:59:40.621138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.529 [2024-11-20 09:59:40.621198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.529 [2024-11-20 09:59:40.621212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.529 [2024-11-20 09:59:40.621220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.529 [2024-11-20 09:59:40.621226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.529 [2024-11-20 09:59:40.621241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.529 qpair failed and we were unable to recover it.
00:27:17.529 [2024-11-20 09:59:40.631211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.529 [2024-11-20 09:59:40.631312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.529 [2024-11-20 09:59:40.631328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.529 [2024-11-20 09:59:40.631336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.529 [2024-11-20 09:59:40.631344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.529 [2024-11-20 09:59:40.631360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.529 qpair failed and we were unable to recover it.
00:27:17.529 [2024-11-20 09:59:40.641194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.529 [2024-11-20 09:59:40.641247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.529 [2024-11-20 09:59:40.641261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.529 [2024-11-20 09:59:40.641268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.529 [2024-11-20 09:59:40.641275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.529 [2024-11-20 09:59:40.641290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.529 qpair failed and we were unable to recover it.
00:27:17.529 [2024-11-20 09:59:40.651260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.529 [2024-11-20 09:59:40.651322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.529 [2024-11-20 09:59:40.651336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.529 [2024-11-20 09:59:40.651344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.529 [2024-11-20 09:59:40.651350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.529 [2024-11-20 09:59:40.651366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.529 qpair failed and we were unable to recover it.
00:27:17.529 [2024-11-20 09:59:40.661251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.529 [2024-11-20 09:59:40.661304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.529 [2024-11-20 09:59:40.661319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.529 [2024-11-20 09:59:40.661326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.529 [2024-11-20 09:59:40.661333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.529 [2024-11-20 09:59:40.661349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.529 qpair failed and we were unable to recover it.
00:27:17.529 [2024-11-20 09:59:40.671305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.529 [2024-11-20 09:59:40.671362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.529 [2024-11-20 09:59:40.671384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.529 [2024-11-20 09:59:40.671391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.529 [2024-11-20 09:59:40.671398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.529 [2024-11-20 09:59:40.671412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.529 qpair failed and we were unable to recover it.
00:27:17.529 [2024-11-20 09:59:40.681358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.529 [2024-11-20 09:59:40.681418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.529 [2024-11-20 09:59:40.681433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.529 [2024-11-20 09:59:40.681440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.529 [2024-11-20 09:59:40.681447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.529 [2024-11-20 09:59:40.681462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.529 qpair failed and we were unable to recover it.
00:27:17.529 [2024-11-20 09:59:40.691373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.529 [2024-11-20 09:59:40.691427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.691441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.691448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.691455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.691471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.701360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.701419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.701434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.701441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.701448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.701463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.711383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.711440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.711454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.711461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.711471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.711487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.721458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.721519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.721534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.721542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.721548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.721564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.731455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.731509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.731523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.731531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.731538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.731553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.741507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.741564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.741578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.741585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.741592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.741608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.751441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.751524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.751539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.751547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.751553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.751569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.761539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.761617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.761632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.761639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.761645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.761661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.771569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.771641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.771656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.771663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.771670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.771685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.781597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.781652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.781666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.781674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.781680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.781695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.791673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.791732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.791748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.791756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.791763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.791778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.801663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.801741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.801755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.801765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.801772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.801788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.811692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.530 [2024-11-20 09:59:40.811748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.530 [2024-11-20 09:59:40.811762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.530 [2024-11-20 09:59:40.811769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.530 [2024-11-20 09:59:40.811776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.530 [2024-11-20 09:59:40.811791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.530 qpair failed and we were unable to recover it.
00:27:17.530 [2024-11-20 09:59:40.821751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.531 [2024-11-20 09:59:40.821821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.531 [2024-11-20 09:59:40.821837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.531 [2024-11-20 09:59:40.821845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.531 [2024-11-20 09:59:40.821853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.531 [2024-11-20 09:59:40.821869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.531 qpair failed and we were unable to recover it.
00:27:17.531 [2024-11-20 09:59:40.831754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.531 [2024-11-20 09:59:40.831812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.531 [2024-11-20 09:59:40.831827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.531 [2024-11-20 09:59:40.831835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.531 [2024-11-20 09:59:40.831842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.531 [2024-11-20 09:59:40.831857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.531 qpair failed and we were unable to recover it.
00:27:17.531 [2024-11-20 09:59:40.841776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.531 [2024-11-20 09:59:40.841856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.531 [2024-11-20 09:59:40.841871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.531 [2024-11-20 09:59:40.841882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.531 [2024-11-20 09:59:40.841888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.531 [2024-11-20 09:59:40.841904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.531 qpair failed and we were unable to recover it.
00:27:17.531 [2024-11-20 09:59:40.851742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.531 [2024-11-20 09:59:40.851826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.531 [2024-11-20 09:59:40.851840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.531 [2024-11-20 09:59:40.851847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.531 [2024-11-20 09:59:40.851853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.531 [2024-11-20 09:59:40.851869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.531 qpair failed and we were unable to recover it.
00:27:17.793 [2024-11-20 09:59:40.861847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.793 [2024-11-20 09:59:40.861904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.793 [2024-11-20 09:59:40.861919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.793 [2024-11-20 09:59:40.861926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.793 [2024-11-20 09:59:40.861933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.793 [2024-11-20 09:59:40.861953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.793 qpair failed and we were unable to recover it.
00:27:17.793 [2024-11-20 09:59:40.871884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.793 [2024-11-20 09:59:40.871959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.793 [2024-11-20 09:59:40.871975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.793 [2024-11-20 09:59:40.871983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.793 [2024-11-20 09:59:40.871990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.793 [2024-11-20 09:59:40.872007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.793 qpair failed and we were unable to recover it.
00:27:17.793 [2024-11-20 09:59:40.881957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.793 [2024-11-20 09:59:40.882046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.793 [2024-11-20 09:59:40.882060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.793 [2024-11-20 09:59:40.882068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.793 [2024-11-20 09:59:40.882074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.793 [2024-11-20 09:59:40.882093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.793 qpair failed and we were unable to recover it.
00:27:17.793 [2024-11-20 09:59:40.891962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.793 [2024-11-20 09:59:40.892022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.793 [2024-11-20 09:59:40.892037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.793 [2024-11-20 09:59:40.892044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.793 [2024-11-20 09:59:40.892050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.793 [2024-11-20 09:59:40.892066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.793 qpair failed and we were unable to recover it.
00:27:17.793 [2024-11-20 09:59:40.901887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.793 [2024-11-20 09:59:40.901938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.793 [2024-11-20 09:59:40.901956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.793 [2024-11-20 09:59:40.901964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.793 [2024-11-20 09:59:40.901970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.793 [2024-11-20 09:59:40.901986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.793 qpair failed and we were unable to recover it.
00:27:17.793 [2024-11-20 09:59:40.911975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.793 [2024-11-20 09:59:40.912030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.793 [2024-11-20 09:59:40.912044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.793 [2024-11-20 09:59:40.912051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.793 [2024-11-20 09:59:40.912057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.793 [2024-11-20 09:59:40.912073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.793 qpair failed and we were unable to recover it.
00:27:17.793 [2024-11-20 09:59:40.922021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.793 [2024-11-20 09:59:40.922084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.793 [2024-11-20 09:59:40.922098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.793 [2024-11-20 09:59:40.922105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.793 [2024-11-20 09:59:40.922112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.793 [2024-11-20 09:59:40.922128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.793 qpair failed and we were unable to recover it.
00:27:17.793 [2024-11-20 09:59:40.931958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.793 [2024-11-20 09:59:40.932017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.793 [2024-11-20 09:59:40.932031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.793 [2024-11-20 09:59:40.932038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.793 [2024-11-20 09:59:40.932044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.793 [2024-11-20 09:59:40.932061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.793 qpair failed and we were unable to recover it.
00:27:17.793 [2024-11-20 09:59:40.942061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.793 [2024-11-20 09:59:40.942118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.794 [2024-11-20 09:59:40.942131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.794 [2024-11-20 09:59:40.942139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.794 [2024-11-20 09:59:40.942146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.794 [2024-11-20 09:59:40.942161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.794 qpair failed and we were unable to recover it.
00:27:17.794 [2024-11-20 09:59:40.952113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.794 [2024-11-20 09:59:40.952167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.794 [2024-11-20 09:59:40.952181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.794 [2024-11-20 09:59:40.952188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.794 [2024-11-20 09:59:40.952195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.794 [2024-11-20 09:59:40.952210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.794 qpair failed and we were unable to recover it.
00:27:17.794 [2024-11-20 09:59:40.962161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:17.794 [2024-11-20 09:59:40.962245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:17.794 [2024-11-20 09:59:40.962260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:17.794 [2024-11-20 09:59:40.962267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:17.794 [2024-11-20 09:59:40.962275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:17.794 [2024-11-20 09:59:40.962290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:17.794 qpair failed and we were unable to recover it.
00:27:17.794 [2024-11-20 09:59:40.972133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:40.972189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:40.972203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:40.972214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:40.972220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:40.972236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.794 [2024-11-20 09:59:40.982152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:40.982207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:40.982221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:40.982228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:40.982234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:40.982249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.794 [2024-11-20 09:59:40.992156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:40.992213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:40.992227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:40.992235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:40.992242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:40.992258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.794 [2024-11-20 09:59:41.002234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:41.002293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:41.002307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:41.002314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:41.002321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:41.002336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.794 [2024-11-20 09:59:41.012271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:41.012324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:41.012338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:41.012345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:41.012352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:41.012371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.794 [2024-11-20 09:59:41.022293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:41.022355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:41.022368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:41.022376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:41.022382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:41.022399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.794 [2024-11-20 09:59:41.032379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:41.032488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:41.032503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:41.032512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:41.032519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:41.032534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.794 [2024-11-20 09:59:41.042289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:41.042349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:41.042362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:41.042369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:41.042375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:41.042391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.794 [2024-11-20 09:59:41.052341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:41.052399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:41.052412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:41.052420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:41.052427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:41.052442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.794 [2024-11-20 09:59:41.062330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.794 [2024-11-20 09:59:41.062386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.794 [2024-11-20 09:59:41.062400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.794 [2024-11-20 09:59:41.062407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.794 [2024-11-20 09:59:41.062414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.794 [2024-11-20 09:59:41.062429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.794 qpair failed and we were unable to recover it. 
00:27:17.795 [2024-11-20 09:59:41.072358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.795 [2024-11-20 09:59:41.072417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.795 [2024-11-20 09:59:41.072431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.795 [2024-11-20 09:59:41.072438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.795 [2024-11-20 09:59:41.072445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.795 [2024-11-20 09:59:41.072460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.795 qpair failed and we were unable to recover it. 
00:27:17.795 [2024-11-20 09:59:41.082456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.795 [2024-11-20 09:59:41.082511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.795 [2024-11-20 09:59:41.082525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.795 [2024-11-20 09:59:41.082532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.795 [2024-11-20 09:59:41.082539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.795 [2024-11-20 09:59:41.082554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.795 qpair failed and we were unable to recover it. 
00:27:17.795 [2024-11-20 09:59:41.092403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.795 [2024-11-20 09:59:41.092473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.795 [2024-11-20 09:59:41.092487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.795 [2024-11-20 09:59:41.092495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.795 [2024-11-20 09:59:41.092501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.795 [2024-11-20 09:59:41.092517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.795 qpair failed and we were unable to recover it. 
00:27:17.795 [2024-11-20 09:59:41.102510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.795 [2024-11-20 09:59:41.102567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.795 [2024-11-20 09:59:41.102584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.795 [2024-11-20 09:59:41.102592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.795 [2024-11-20 09:59:41.102598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.795 [2024-11-20 09:59:41.102614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.795 qpair failed and we were unable to recover it. 
00:27:17.795 [2024-11-20 09:59:41.112544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:17.795 [2024-11-20 09:59:41.112599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:17.795 [2024-11-20 09:59:41.112612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:17.795 [2024-11-20 09:59:41.112620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:17.795 [2024-11-20 09:59:41.112626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:17.795 [2024-11-20 09:59:41.112642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:17.795 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.122589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.056 [2024-11-20 09:59:41.122653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.056 [2024-11-20 09:59:41.122667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.056 [2024-11-20 09:59:41.122676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.056 [2024-11-20 09:59:41.122683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.056 [2024-11-20 09:59:41.122699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.132597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.056 [2024-11-20 09:59:41.132650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.056 [2024-11-20 09:59:41.132666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.056 [2024-11-20 09:59:41.132675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.056 [2024-11-20 09:59:41.132682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.056 [2024-11-20 09:59:41.132697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.142618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.056 [2024-11-20 09:59:41.142713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.056 [2024-11-20 09:59:41.142727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.056 [2024-11-20 09:59:41.142734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.056 [2024-11-20 09:59:41.142744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.056 [2024-11-20 09:59:41.142759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.152594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.056 [2024-11-20 09:59:41.152651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.056 [2024-11-20 09:59:41.152664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.056 [2024-11-20 09:59:41.152672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.056 [2024-11-20 09:59:41.152678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.056 [2024-11-20 09:59:41.152694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.162665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.056 [2024-11-20 09:59:41.162760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.056 [2024-11-20 09:59:41.162774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.056 [2024-11-20 09:59:41.162781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.056 [2024-11-20 09:59:41.162787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.056 [2024-11-20 09:59:41.162803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.172640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.056 [2024-11-20 09:59:41.172698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.056 [2024-11-20 09:59:41.172712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.056 [2024-11-20 09:59:41.172719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.056 [2024-11-20 09:59:41.172725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.056 [2024-11-20 09:59:41.172741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.182751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.056 [2024-11-20 09:59:41.182808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.056 [2024-11-20 09:59:41.182821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.056 [2024-11-20 09:59:41.182828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.056 [2024-11-20 09:59:41.182834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.056 [2024-11-20 09:59:41.182849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.192722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.056 [2024-11-20 09:59:41.192823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.056 [2024-11-20 09:59:41.192837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.056 [2024-11-20 09:59:41.192844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.056 [2024-11-20 09:59:41.192851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.056 [2024-11-20 09:59:41.192866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.202779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.056 [2024-11-20 09:59:41.202880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.056 [2024-11-20 09:59:41.202895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.056 [2024-11-20 09:59:41.202902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.056 [2024-11-20 09:59:41.202908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.056 [2024-11-20 09:59:41.202926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.056 qpair failed and we were unable to recover it. 
00:27:18.056 [2024-11-20 09:59:41.212796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.212865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.212880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.212887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.212893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.212908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.222850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.222901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.222915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.222922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.222929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.222945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.232863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.232926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.232944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.232956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.232963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.232979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.242866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.242965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.242979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.242986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.242993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.243008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.252932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.253037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.253051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.253059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.253065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.253082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.262978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.263028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.263043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.263050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.263057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.263072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.272999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.273054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.273068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.273075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.273086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.273101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.283047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.283105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.283119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.283126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.283132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.283148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.293088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.293141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.293156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.293163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.293170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.293186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.303096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.303155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.303170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.303178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.303185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.303202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.313192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.313258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.313272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.313280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.313286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.313302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.323165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.057 [2024-11-20 09:59:41.323222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.057 [2024-11-20 09:59:41.323236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.057 [2024-11-20 09:59:41.323243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.057 [2024-11-20 09:59:41.323250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.057 [2024-11-20 09:59:41.323266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.057 qpair failed and we were unable to recover it. 
00:27:18.057 [2024-11-20 09:59:41.333211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.058 [2024-11-20 09:59:41.333260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.058 [2024-11-20 09:59:41.333274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.058 [2024-11-20 09:59:41.333281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.058 [2024-11-20 09:59:41.333287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.058 [2024-11-20 09:59:41.333301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.058 [2024-11-20 09:59:41.343248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.058 [2024-11-20 09:59:41.343299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.058 [2024-11-20 09:59:41.343313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.058 [2024-11-20 09:59:41.343320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.058 [2024-11-20 09:59:41.343327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.058 [2024-11-20 09:59:41.343343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.058 [2024-11-20 09:59:41.353260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.058 [2024-11-20 09:59:41.353317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.058 [2024-11-20 09:59:41.353330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.058 [2024-11-20 09:59:41.353337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.058 [2024-11-20 09:59:41.353344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.058 [2024-11-20 09:59:41.353359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.058 [2024-11-20 09:59:41.363313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.058 [2024-11-20 09:59:41.363374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.058 [2024-11-20 09:59:41.363389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.058 [2024-11-20 09:59:41.363396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.058 [2024-11-20 09:59:41.363403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.058 [2024-11-20 09:59:41.363418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.058 [2024-11-20 09:59:41.373305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.058 [2024-11-20 09:59:41.373360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.058 [2024-11-20 09:59:41.373374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.058 [2024-11-20 09:59:41.373381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.058 [2024-11-20 09:59:41.373388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.058 [2024-11-20 09:59:41.373403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.058 [2024-11-20 09:59:41.383295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.058 [2024-11-20 09:59:41.383361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.058 [2024-11-20 09:59:41.383379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.058 [2024-11-20 09:59:41.383390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.058 [2024-11-20 09:59:41.383397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.058 [2024-11-20 09:59:41.383412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.058 qpair failed and we were unable to recover it. 
00:27:18.318 [2024-11-20 09:59:41.393296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.318 [2024-11-20 09:59:41.393370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.318 [2024-11-20 09:59:41.393384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.318 [2024-11-20 09:59:41.393392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.318 [2024-11-20 09:59:41.393400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.318 [2024-11-20 09:59:41.393415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.318 qpair failed and we were unable to recover it. 
00:27:18.318 [2024-11-20 09:59:41.403410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.318 [2024-11-20 09:59:41.403466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.318 [2024-11-20 09:59:41.403480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.318 [2024-11-20 09:59:41.403490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.318 [2024-11-20 09:59:41.403497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.318 [2024-11-20 09:59:41.403513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.318 qpair failed and we were unable to recover it. 
00:27:18.318 [2024-11-20 09:59:41.413421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.318 [2024-11-20 09:59:41.413475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.318 [2024-11-20 09:59:41.413489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.318 [2024-11-20 09:59:41.413496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.318 [2024-11-20 09:59:41.413503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.318 [2024-11-20 09:59:41.413518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.318 qpair failed and we were unable to recover it. 
00:27:18.318 [2024-11-20 09:59:41.423441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.318 [2024-11-20 09:59:41.423497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.318 [2024-11-20 09:59:41.423511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.318 [2024-11-20 09:59:41.423518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.318 [2024-11-20 09:59:41.423525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.318 [2024-11-20 09:59:41.423540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.318 qpair failed and we were unable to recover it. 
00:27:18.318 [2024-11-20 09:59:41.433473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.318 [2024-11-20 09:59:41.433529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.433543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.433550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.433557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.433572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.443500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.443553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.443568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.443576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.443582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.443606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.453527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.453583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.453597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.453604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.453610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.453626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.463540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.463596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.463610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.463618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.463625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.463640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.473587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.473643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.473657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.473664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.473670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.473686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.483610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.483667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.483681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.483689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.483695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.483711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.493684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.493775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.493789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.493796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.493802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.493818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.503667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.503726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.503740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.503748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.503755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.503770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.513707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.513762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.513776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.513783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.513790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.513805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.523786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.523891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.319 [2024-11-20 09:59:41.523905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.319 [2024-11-20 09:59:41.523913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.319 [2024-11-20 09:59:41.523919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.319 [2024-11-20 09:59:41.523935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.319 qpair failed and we were unable to recover it. 
00:27:18.319 [2024-11-20 09:59:41.533715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.319 [2024-11-20 09:59:41.533803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.533821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.533828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.533834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.533849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.543775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.543827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.543843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.543850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.543857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.543874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.553876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.553928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.553942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.553953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.553960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.553976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.563863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.563926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.563940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.563952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.563959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.563975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.573865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.573916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.573929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.573936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.573943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.573965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.583889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.583942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.583961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.583969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.583975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.583991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.593865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.593956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.593973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.593980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.593987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.594004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.603999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.604078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.604093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.604100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.604107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.604122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.613986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.614044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.614059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.614066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.614073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.614089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.624007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.624073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.624088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.624096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.624102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.624118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.634104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.320 [2024-11-20 09:59:41.634159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.320 [2024-11-20 09:59:41.634173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.320 [2024-11-20 09:59:41.634180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.320 [2024-11-20 09:59:41.634187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.320 [2024-11-20 09:59:41.634202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.320 qpair failed and we were unable to recover it. 
00:27:18.320 [2024-11-20 09:59:41.644068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.321 [2024-11-20 09:59:41.644126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.321 [2024-11-20 09:59:41.644140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.321 [2024-11-20 09:59:41.644148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.321 [2024-11-20 09:59:41.644155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.321 [2024-11-20 09:59:41.644170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.321 qpair failed and we were unable to recover it. 
00:27:18.582 [2024-11-20 09:59:41.654070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.582 [2024-11-20 09:59:41.654129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.582 [2024-11-20 09:59:41.654144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.582 [2024-11-20 09:59:41.654152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.582 [2024-11-20 09:59:41.654158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.582 [2024-11-20 09:59:41.654174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.582 qpair failed and we were unable to recover it. 
00:27:18.582 [2024-11-20 09:59:41.664168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.582 [2024-11-20 09:59:41.664230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.582 [2024-11-20 09:59:41.664248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.582 [2024-11-20 09:59:41.664255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.582 [2024-11-20 09:59:41.664261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.582 [2024-11-20 09:59:41.664277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.582 qpair failed and we were unable to recover it. 
00:27:18.582 [2024-11-20 09:59:41.674175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.582 [2024-11-20 09:59:41.674234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.582 [2024-11-20 09:59:41.674250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.582 [2024-11-20 09:59:41.674257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.582 [2024-11-20 09:59:41.674264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.582 [2024-11-20 09:59:41.674280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.582 qpair failed and we were unable to recover it. 
00:27:18.582 [2024-11-20 09:59:41.684204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.582 [2024-11-20 09:59:41.684262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.582 [2024-11-20 09:59:41.684277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.582 [2024-11-20 09:59:41.684285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.582 [2024-11-20 09:59:41.684292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.582 [2024-11-20 09:59:41.684308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.582 qpair failed and we were unable to recover it. 
00:27:18.582 [2024-11-20 09:59:41.694284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.582 [2024-11-20 09:59:41.694335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.582 [2024-11-20 09:59:41.694349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.582 [2024-11-20 09:59:41.694356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.582 [2024-11-20 09:59:41.694363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.582 [2024-11-20 09:59:41.694379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.582 qpair failed and we were unable to recover it. 
00:27:18.582 [2024-11-20 09:59:41.704249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.704303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.704316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.704323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.704333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.704349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.714264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.714358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.714371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.714379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.714385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.714400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.724307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.724359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.724373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.724380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.724386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.724401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.734331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.734437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.734451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.734458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.734465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.734481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.744404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.744456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.744471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.744478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.744485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.744501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.754387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.754460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.754474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.754481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.754487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.754502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.764419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.764473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.764486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.764493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.764500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.764515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.774468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.774528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.774542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.774550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.774556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.774571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.784471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.784525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.784540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.784547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.784553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.784570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.794478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.794545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.794563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.794570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.794576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.794591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.804530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.804599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.804614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.804621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.804628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.804643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.814562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.814625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.814639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.814647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.814653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.814668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.583 [2024-11-20 09:59:41.824578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.583 [2024-11-20 09:59:41.824630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.583 [2024-11-20 09:59:41.824644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.583 [2024-11-20 09:59:41.824651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.583 [2024-11-20 09:59:41.824658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.583 [2024-11-20 09:59:41.824674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.583 qpair failed and we were unable to recover it. 
00:27:18.584 [2024-11-20 09:59:41.834621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.584 [2024-11-20 09:59:41.834690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.584 [2024-11-20 09:59:41.834703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.584 [2024-11-20 09:59:41.834714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.584 [2024-11-20 09:59:41.834720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.584 [2024-11-20 09:59:41.834736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.584 qpair failed and we were unable to recover it. 
00:27:18.584 [2024-11-20 09:59:41.844639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.584 [2024-11-20 09:59:41.844700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.584 [2024-11-20 09:59:41.844714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.584 [2024-11-20 09:59:41.844722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.584 [2024-11-20 09:59:41.844728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.584 [2024-11-20 09:59:41.844743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.584 qpair failed and we were unable to recover it. 
00:27:18.584 [2024-11-20 09:59:41.854676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:18.584 [2024-11-20 09:59:41.854752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:18.584 [2024-11-20 09:59:41.854766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:18.584 [2024-11-20 09:59:41.854774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:18.584 [2024-11-20 09:59:41.854780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:18.584 [2024-11-20 09:59:41.854795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:18.584 qpair failed and we were unable to recover it. 
00:27:18.584 [2024-11-20 09:59:41.864606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:18.584 [2024-11-20 09:59:41.864668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:18.584 [2024-11-20 09:59:41.864683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:18.584 [2024-11-20 09:59:41.864690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:18.584 [2024-11-20 09:59:41.864697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:18.584 [2024-11-20 09:59:41.864712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:18.584 qpair failed and we were unable to recover it.
[The identical six-entry CONNECT failure sequence (Unknown controller ID 0x1 -> Connect command failed, rc -5 -> sct 1, sc 130 -> failed to poll CONNECT -> failed to connect tqpair=0x7f7ba8000b90 -> CQ transport error -6 on qpair id 1) repeats at ~10 ms intervals from 09:59:41.874 through 09:59:42.205, each attempt ending "qpair failed and we were unable to recover it."]
00:27:19.111 [2024-11-20 09:59:42.215710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.111 [2024-11-20 09:59:42.215770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.111 [2024-11-20 09:59:42.215784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.111 [2024-11-20 09:59:42.215792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.111 [2024-11-20 09:59:42.215798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.111 [2024-11-20 09:59:42.215813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.111 qpair failed and we were unable to recover it. 
00:27:19.111 [2024-11-20 09:59:42.225693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.111 [2024-11-20 09:59:42.225752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.111 [2024-11-20 09:59:42.225773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.111 [2024-11-20 09:59:42.225781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.111 [2024-11-20 09:59:42.225787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.111 [2024-11-20 09:59:42.225802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.111 qpair failed and we were unable to recover it. 
00:27:19.111 [2024-11-20 09:59:42.235748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.111 [2024-11-20 09:59:42.235813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.111 [2024-11-20 09:59:42.235828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.111 [2024-11-20 09:59:42.235839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.111 [2024-11-20 09:59:42.235845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.111 [2024-11-20 09:59:42.235860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.111 qpair failed and we were unable to recover it. 
00:27:19.111 [2024-11-20 09:59:42.245760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.111 [2024-11-20 09:59:42.245839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.111 [2024-11-20 09:59:42.245855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.111 [2024-11-20 09:59:42.245862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.111 [2024-11-20 09:59:42.245868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.111 [2024-11-20 09:59:42.245884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.111 qpair failed and we were unable to recover it. 
00:27:19.111 [2024-11-20 09:59:42.255833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.111 [2024-11-20 09:59:42.255892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.111 [2024-11-20 09:59:42.255906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.111 [2024-11-20 09:59:42.255914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.111 [2024-11-20 09:59:42.255922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.111 [2024-11-20 09:59:42.255937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.111 qpair failed and we were unable to recover it. 
00:27:19.111 [2024-11-20 09:59:42.265817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.111 [2024-11-20 09:59:42.265871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.111 [2024-11-20 09:59:42.265886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.111 [2024-11-20 09:59:42.265893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.111 [2024-11-20 09:59:42.265903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.111 [2024-11-20 09:59:42.265919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.111 qpair failed and we were unable to recover it. 
00:27:19.111 [2024-11-20 09:59:42.275906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.111 [2024-11-20 09:59:42.275968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.111 [2024-11-20 09:59:42.275982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.111 [2024-11-20 09:59:42.275989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.111 [2024-11-20 09:59:42.275996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.111 [2024-11-20 09:59:42.276011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.111 qpair failed and we were unable to recover it. 
00:27:19.111 [2024-11-20 09:59:42.285883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.111 [2024-11-20 09:59:42.285942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.111 [2024-11-20 09:59:42.285961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.111 [2024-11-20 09:59:42.285968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.111 [2024-11-20 09:59:42.285975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.111 [2024-11-20 09:59:42.285993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.111 qpair failed and we were unable to recover it. 
00:27:19.111 [2024-11-20 09:59:42.295904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.111 [2024-11-20 09:59:42.295964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.111 [2024-11-20 09:59:42.295978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.111 [2024-11-20 09:59:42.295985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.295992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.296008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.305864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.305921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.305936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.305943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.305956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.305973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.315892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.315954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.315968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.315976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.315983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.315998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.325988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.326069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.326083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.326090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.326097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.326113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.335999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.336054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.336069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.336077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.336084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.336101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.345975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.346027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.346041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.346048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.346054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.346070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.356148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.356229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.356247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.356254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.356260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.356276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.366100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.366160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.366175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.366183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.366190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.366206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.376153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.376209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.376223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.376230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.376237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.376252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.386217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.386271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.386286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.386293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.386300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.386317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.112 [2024-11-20 09:59:42.396206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.112 [2024-11-20 09:59:42.396264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.112 [2024-11-20 09:59:42.396278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.112 [2024-11-20 09:59:42.396288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.112 [2024-11-20 09:59:42.396295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.112 [2024-11-20 09:59:42.396311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.112 qpair failed and we were unable to recover it. 
00:27:19.113 [2024-11-20 09:59:42.406178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.113 [2024-11-20 09:59:42.406235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.113 [2024-11-20 09:59:42.406249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.113 [2024-11-20 09:59:42.406256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.113 [2024-11-20 09:59:42.406262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.113 [2024-11-20 09:59:42.406278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.113 qpair failed and we were unable to recover it. 
00:27:19.113 [2024-11-20 09:59:42.416269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.113 [2024-11-20 09:59:42.416360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.113 [2024-11-20 09:59:42.416376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.113 [2024-11-20 09:59:42.416383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.113 [2024-11-20 09:59:42.416390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.113 [2024-11-20 09:59:42.416406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.113 qpair failed and we were unable to recover it. 
00:27:19.113 [2024-11-20 09:59:42.426261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.113 [2024-11-20 09:59:42.426321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.113 [2024-11-20 09:59:42.426335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.113 [2024-11-20 09:59:42.426343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.113 [2024-11-20 09:59:42.426349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.113 [2024-11-20 09:59:42.426364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.113 qpair failed and we were unable to recover it. 
00:27:19.113 [2024-11-20 09:59:42.436255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.113 [2024-11-20 09:59:42.436311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.113 [2024-11-20 09:59:42.436325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.113 [2024-11-20 09:59:42.436332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.113 [2024-11-20 09:59:42.436339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.113 [2024-11-20 09:59:42.436354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.113 qpair failed and we were unable to recover it. 
00:27:19.373 [2024-11-20 09:59:42.446347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.373 [2024-11-20 09:59:42.446433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.373 [2024-11-20 09:59:42.446447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.373 [2024-11-20 09:59:42.446454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.373 [2024-11-20 09:59:42.446461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.373 [2024-11-20 09:59:42.446475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.373 qpair failed and we were unable to recover it. 
00:27:19.374 [2024-11-20 09:59:42.456373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.374 [2024-11-20 09:59:42.456430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.374 [2024-11-20 09:59:42.456445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.374 [2024-11-20 09:59:42.456452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.374 [2024-11-20 09:59:42.456458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.374 [2024-11-20 09:59:42.456473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.374 qpair failed and we were unable to recover it. 
00:27:19.374 [2024-11-20 09:59:42.466391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.374 [2024-11-20 09:59:42.466444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.374 [2024-11-20 09:59:42.466458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.374 [2024-11-20 09:59:42.466465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.374 [2024-11-20 09:59:42.466472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.374 [2024-11-20 09:59:42.466487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.374 qpair failed and we were unable to recover it. 
00:27:19.374 [2024-11-20 09:59:42.476419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.374 [2024-11-20 09:59:42.476494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.374 [2024-11-20 09:59:42.476508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.374 [2024-11-20 09:59:42.476515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.374 [2024-11-20 09:59:42.476521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.374 [2024-11-20 09:59:42.476535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.374 qpair failed and we were unable to recover it. 
00:27:19.374 [2024-11-20 09:59:42.486449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.486510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.486525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.374 [2024-11-20 09:59:42.486533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.374 [2024-11-20 09:59:42.486539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.374 [2024-11-20 09:59:42.486555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.374 qpair failed and we were unable to recover it.
00:27:19.374 [2024-11-20 09:59:42.496462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.496518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.496532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.374 [2024-11-20 09:59:42.496539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.374 [2024-11-20 09:59:42.496546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.374 [2024-11-20 09:59:42.496562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.374 qpair failed and we were unable to recover it.
00:27:19.374 [2024-11-20 09:59:42.506463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.506541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.506555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.374 [2024-11-20 09:59:42.506563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.374 [2024-11-20 09:59:42.506569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.374 [2024-11-20 09:59:42.506584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.374 qpair failed and we were unable to recover it.
00:27:19.374 [2024-11-20 09:59:42.516522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.516590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.516603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.374 [2024-11-20 09:59:42.516611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.374 [2024-11-20 09:59:42.516618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.374 [2024-11-20 09:59:42.516633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.374 qpair failed and we were unable to recover it.
00:27:19.374 [2024-11-20 09:59:42.526577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.526634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.526648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.374 [2024-11-20 09:59:42.526658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.374 [2024-11-20 09:59:42.526664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.374 [2024-11-20 09:59:42.526680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.374 qpair failed and we were unable to recover it.
00:27:19.374 [2024-11-20 09:59:42.536592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.536648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.536662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.374 [2024-11-20 09:59:42.536669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.374 [2024-11-20 09:59:42.536676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.374 [2024-11-20 09:59:42.536692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.374 qpair failed and we were unable to recover it.
00:27:19.374 [2024-11-20 09:59:42.546642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.546705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.546719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.374 [2024-11-20 09:59:42.546727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.374 [2024-11-20 09:59:42.546733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.374 [2024-11-20 09:59:42.546749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.374 qpair failed and we were unable to recover it.
00:27:19.374 [2024-11-20 09:59:42.556686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.556755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.556771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.374 [2024-11-20 09:59:42.556779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.374 [2024-11-20 09:59:42.556786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.374 [2024-11-20 09:59:42.556801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.374 qpair failed and we were unable to recover it.
00:27:19.374 [2024-11-20 09:59:42.566669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.566750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.566765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.374 [2024-11-20 09:59:42.566772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.374 [2024-11-20 09:59:42.566778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.374 [2024-11-20 09:59:42.566797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.374 qpair failed and we were unable to recover it.
00:27:19.374 [2024-11-20 09:59:42.576719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.374 [2024-11-20 09:59:42.576776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.374 [2024-11-20 09:59:42.576791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.576798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.576805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.576821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.586776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.586837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.586853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.586861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.586868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.586885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.596769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.596828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.596844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.596852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.596858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.596874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.606806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.606861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.606875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.606883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.606890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.606907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.616830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.616882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.616897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.616904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.616911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.616926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.626830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.626883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.626898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.626905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.626911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.626926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.636872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.636930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.636944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.636955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.636962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.636978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.646957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.647058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.647073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.647081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.647087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.647103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.656931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.656989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.657007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.657016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.657022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.657037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.666969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.667032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.667046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.667054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.667060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.667075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.677007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.677063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.677077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.677084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.677091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.677106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.687077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.687136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.687151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.687160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.687166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.687181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.375 [2024-11-20 09:59:42.697081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.375 [2024-11-20 09:59:42.697138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.375 [2024-11-20 09:59:42.697152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.375 [2024-11-20 09:59:42.697159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.375 [2024-11-20 09:59:42.697170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.375 [2024-11-20 09:59:42.697186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.375 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.707134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.707194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.707208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.707216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.707223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.707238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.717155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.717220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.717235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.717242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.717249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.717264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.727156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.727208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.727222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.727230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.727237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.727252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.737181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.737233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.737247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.737254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.737261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.737276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.747206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.747261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.747275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.747282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.747289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.747304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.757244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.757314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.757330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.757337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.757343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.757359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.767238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.767300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.767313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.767320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.767327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.767342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.777341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.777399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.777415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.777422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.777430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.777445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.787388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.787466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.787484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.787491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.787498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.787514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.797357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.797428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.636 [2024-11-20 09:59:42.797442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.636 [2024-11-20 09:59:42.797449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.636 [2024-11-20 09:59:42.797455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.636 [2024-11-20 09:59:42.797471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.636 qpair failed and we were unable to recover it.
00:27:19.636 [2024-11-20 09:59:42.807309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.636 [2024-11-20 09:59:42.807367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.637 [2024-11-20 09:59:42.807381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.637 [2024-11-20 09:59:42.807388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.637 [2024-11-20 09:59:42.807396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.637 [2024-11-20 09:59:42.807410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.637 qpair failed and we were unable to recover it.
00:27:19.637 [2024-11-20 09:59:42.817406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.637 [2024-11-20 09:59:42.817456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.637 [2024-11-20 09:59:42.817470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.637 [2024-11-20 09:59:42.817477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.637 [2024-11-20 09:59:42.817484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.637 [2024-11-20 09:59:42.817500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.637 qpair failed and we were unable to recover it.
00:27:19.637 [2024-11-20 09:59:42.827431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:19.637 [2024-11-20 09:59:42.827487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:19.637 [2024-11-20 09:59:42.827500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:19.637 [2024-11-20 09:59:42.827507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:19.637 [2024-11-20 09:59:42.827517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:19.637 [2024-11-20 09:59:42.827532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:19.637 qpair failed and we were unable to recover it.
00:27:19.637 [2024-11-20 09:59:42.837475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.837530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.837543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.837551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.837557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.837573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.847500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.847557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.847572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.847579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.847587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.847601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.857526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.857581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.857595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.857603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.857609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.857624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.867552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.867632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.867646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.867654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.867660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.867675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.877577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.877649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.877663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.877670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.877677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.877692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.887586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.887641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.887655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.887662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.887669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.887685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.897642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.897698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.897712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.897719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.897725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.897740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.907711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.907769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.907783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.907790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.907796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.907812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.917706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.917762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.917783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.917790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.917797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.917812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.927716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.927783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.637 [2024-11-20 09:59:42.927797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.637 [2024-11-20 09:59:42.927805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.637 [2024-11-20 09:59:42.927811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.637 [2024-11-20 09:59:42.927827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.637 qpair failed and we were unable to recover it. 
00:27:19.637 [2024-11-20 09:59:42.937744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.637 [2024-11-20 09:59:42.937801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.638 [2024-11-20 09:59:42.937815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.638 [2024-11-20 09:59:42.937823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.638 [2024-11-20 09:59:42.937829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.638 [2024-11-20 09:59:42.937845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.638 qpair failed and we were unable to recover it. 
00:27:19.638 [2024-11-20 09:59:42.947772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.638 [2024-11-20 09:59:42.947830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.638 [2024-11-20 09:59:42.947843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.638 [2024-11-20 09:59:42.947851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.638 [2024-11-20 09:59:42.947858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.638 [2024-11-20 09:59:42.947873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.638 qpair failed and we were unable to recover it. 
00:27:19.638 [2024-11-20 09:59:42.957819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.638 [2024-11-20 09:59:42.957883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.638 [2024-11-20 09:59:42.957897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.638 [2024-11-20 09:59:42.957908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.638 [2024-11-20 09:59:42.957915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.638 [2024-11-20 09:59:42.957930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.638 qpair failed and we were unable to recover it. 
00:27:19.951 [2024-11-20 09:59:42.967829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.951 [2024-11-20 09:59:42.967914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:42.967929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:42.967936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:42.967943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:42.967964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:42.977861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:42.977921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:42.977936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:42.977944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:42.977955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:42.977971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:42.987843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:42.987902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:42.987918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:42.987925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:42.987932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:42.987952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:42.997925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:42.997999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:42.998014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:42.998021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:42.998027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:42.998043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.007934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:43.007995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:43.008010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:43.008017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:43.008024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:43.008039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.017975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:43.018030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:43.018044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:43.018051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:43.018058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:43.018073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.027990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:43.028044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:43.028058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:43.028065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:43.028071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:43.028086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.038070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:43.038178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:43.038192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:43.038199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:43.038206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:43.038223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.048107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:43.048220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:43.048235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:43.048242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:43.048250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:43.048265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.058086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:43.058141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:43.058155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:43.058163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:43.058170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:43.058187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.068114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:43.068172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:43.068186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:43.068193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:43.068199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:43.068215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.078192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:43.078252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:43.078266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:43.078273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:43.078280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:43.078295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.088206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.952 [2024-11-20 09:59:43.088262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.952 [2024-11-20 09:59:43.088276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.952 [2024-11-20 09:59:43.088288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.952 [2024-11-20 09:59:43.088295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.952 [2024-11-20 09:59:43.088310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.952 qpair failed and we were unable to recover it. 
00:27:19.952 [2024-11-20 09:59:43.098190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.098248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.098262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.098270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.098277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.098292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.108226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.108283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.108297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.108304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.108311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.108326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.118252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.118313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.118328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.118337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.118344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.118360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.128317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.128372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.128386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.128394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.128400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.128419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.138300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.138354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.138367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.138374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.138382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.138397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.148333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.148427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.148441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.148448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.148455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.148470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.158368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.158424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.158438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.158447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.158454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.158469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.168434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.168494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.168508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.168516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.168523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.168539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.178451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.178513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.178527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.178535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.178542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.178557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.188416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.188511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.188526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.188534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.188540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.188556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.198484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.198545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.198559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.198567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.198573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.198589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.208444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.208534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.208550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.208557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.208563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.208579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.218534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.953 [2024-11-20 09:59:43.218588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.953 [2024-11-20 09:59:43.218605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.953 [2024-11-20 09:59:43.218613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.953 [2024-11-20 09:59:43.218619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.953 [2024-11-20 09:59:43.218635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.953 qpair failed and we were unable to recover it. 
00:27:19.953 [2024-11-20 09:59:43.228559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.954 [2024-11-20 09:59:43.228616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.954 [2024-11-20 09:59:43.228630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.954 [2024-11-20 09:59:43.228638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.954 [2024-11-20 09:59:43.228645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.954 [2024-11-20 09:59:43.228660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.954 qpair failed and we were unable to recover it. 
00:27:19.954 [2024-11-20 09:59:43.238624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.954 [2024-11-20 09:59:43.238678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.954 [2024-11-20 09:59:43.238692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.954 [2024-11-20 09:59:43.238699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.954 [2024-11-20 09:59:43.238706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.954 [2024-11-20 09:59:43.238722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.954 qpair failed and we were unable to recover it. 
00:27:19.954 [2024-11-20 09:59:43.248591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.954 [2024-11-20 09:59:43.248679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.954 [2024-11-20 09:59:43.248695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.954 [2024-11-20 09:59:43.248703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.954 [2024-11-20 09:59:43.248710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.954 [2024-11-20 09:59:43.248726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.954 qpair failed and we were unable to recover it. 
00:27:19.954 [2024-11-20 09:59:43.258647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.954 [2024-11-20 09:59:43.258714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.954 [2024-11-20 09:59:43.258728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.954 [2024-11-20 09:59:43.258736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.954 [2024-11-20 09:59:43.258745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.954 [2024-11-20 09:59:43.258761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.954 qpair failed and we were unable to recover it. 
00:27:19.954 [2024-11-20 09:59:43.268674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:19.954 [2024-11-20 09:59:43.268728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:19.954 [2024-11-20 09:59:43.268741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:19.954 [2024-11-20 09:59:43.268748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:19.954 [2024-11-20 09:59:43.268755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:19.954 [2024-11-20 09:59:43.268771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:19.954 qpair failed and we were unable to recover it. 
00:27:20.255 [2024-11-20 09:59:43.278655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.255 [2024-11-20 09:59:43.278726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.255 [2024-11-20 09:59:43.278741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.255 [2024-11-20 09:59:43.278749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.255 [2024-11-20 09:59:43.278755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.255 [2024-11-20 09:59:43.278770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.255 qpair failed and we were unable to recover it. 
00:27:20.255 [2024-11-20 09:59:43.288767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.255 [2024-11-20 09:59:43.288828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.255 [2024-11-20 09:59:43.288845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.255 [2024-11-20 09:59:43.288853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.255 [2024-11-20 09:59:43.288860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.255 [2024-11-20 09:59:43.288876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.255 qpair failed and we were unable to recover it. 
00:27:20.255 [2024-11-20 09:59:43.298815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.255 [2024-11-20 09:59:43.298869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.255 [2024-11-20 09:59:43.298884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.255 [2024-11-20 09:59:43.298891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.255 [2024-11-20 09:59:43.298898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.255 [2024-11-20 09:59:43.298914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.255 qpair failed and we were unable to recover it. 
00:27:20.255 [2024-11-20 09:59:43.308796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.255 [2024-11-20 09:59:43.308851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.255 [2024-11-20 09:59:43.308865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.308872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.308879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.308895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.318837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.318894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.318908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.318915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.318922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.318937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.328855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.328912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.328925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.328933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.328940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.328959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.338858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.338926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.338940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.338950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.338957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.338973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.348942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.348999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.349016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.349024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.349030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.349045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.358954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.359025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.359040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.359047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.359053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.359069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.368973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.369054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.369069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.369076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.369082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.369097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.379027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.379088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.379102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.379110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.379116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.379132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.389024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.389077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.389092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.389099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.389109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.389126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.399034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.399141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.399155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.399163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.399171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.399187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.409086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.256 [2024-11-20 09:59:43.409142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.256 [2024-11-20 09:59:43.409156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.256 [2024-11-20 09:59:43.409163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.256 [2024-11-20 09:59:43.409169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.256 [2024-11-20 09:59:43.409185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.256 qpair failed and we were unable to recover it. 
00:27:20.256 [2024-11-20 09:59:43.419131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.256 [2024-11-20 09:59:43.419186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.256 [2024-11-20 09:59:43.419199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.256 [2024-11-20 09:59:43.419208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.256 [2024-11-20 09:59:43.419216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.256 [2024-11-20 09:59:43.419232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.256 qpair failed and we were unable to recover it.
00:27:20.256 [2024-11-20 09:59:43.429087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.256 [2024-11-20 09:59:43.429145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.429158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.429166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.429172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.429187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.439170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.439226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.439239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.439247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.439253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.439268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.449218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.449277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.449290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.449298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.449304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.449320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.459275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.459340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.459355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.459362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.459368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.459384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.469257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.469310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.469324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.469331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.469338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.469353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.479233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.479291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.479308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.479315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.479320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.479336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.489318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.489376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.489391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.489399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.489406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.489422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.499340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.499393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.499407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.499415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.499421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.499437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.509341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.509394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.509408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.509415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.509421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.509437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.519414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.519467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.519480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.519491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.519498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.519513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.529428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.529529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.529543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.529550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.529556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.529572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.539452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.539506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.539519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.539526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.539533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.539549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.549473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.549528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.257 [2024-11-20 09:59:43.549541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.257 [2024-11-20 09:59:43.549549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.257 [2024-11-20 09:59:43.549555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.257 [2024-11-20 09:59:43.549570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.257 qpair failed and we were unable to recover it.
00:27:20.257 [2024-11-20 09:59:43.559514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.257 [2024-11-20 09:59:43.559570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.258 [2024-11-20 09:59:43.559583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.258 [2024-11-20 09:59:43.559591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.258 [2024-11-20 09:59:43.559597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.258 [2024-11-20 09:59:43.559613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.258 qpair failed and we were unable to recover it.
00:27:20.525 [2024-11-20 09:59:43.569500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.525 [2024-11-20 09:59:43.569557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.525 [2024-11-20 09:59:43.569571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.525 [2024-11-20 09:59:43.569578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.525 [2024-11-20 09:59:43.569584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.525 [2024-11-20 09:59:43.569599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.525 qpair failed and we were unable to recover it.
00:27:20.525 [2024-11-20 09:59:43.579575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.525 [2024-11-20 09:59:43.579649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.525 [2024-11-20 09:59:43.579663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.525 [2024-11-20 09:59:43.579670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.525 [2024-11-20 09:59:43.579676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.525 [2024-11-20 09:59:43.579691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.525 qpair failed and we were unable to recover it.
00:27:20.525 [2024-11-20 09:59:43.589599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.525 [2024-11-20 09:59:43.589648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.525 [2024-11-20 09:59:43.589665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.525 [2024-11-20 09:59:43.589674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.525 [2024-11-20 09:59:43.589681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.525 [2024-11-20 09:59:43.589697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.599647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.599718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.599732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.599740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.599746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.599761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.609664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.609728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.609742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.609750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.609756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.609771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.619671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.619774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.619789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.619796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.619803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.619819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.629716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.629769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.629783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.629790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.629797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.629813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.639767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.639823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.639837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.639845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.639851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.639867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.649812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.649875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.649889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.649903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.649909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.649925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.659814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.659872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.659886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.659894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.659900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.659916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.669832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.669888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.669902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.669909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.669916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.669931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.679869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.679926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.679940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.679952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.679959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.679975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.689960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.690023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.690038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.690047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.690053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.690072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.699887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.699985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.699999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.700006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.700013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.700029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.710002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.710056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.710070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.710077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.710084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.710100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.526 qpair failed and we were unable to recover it.
00:27:20.526 [2024-11-20 09:59:43.719987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.526 [2024-11-20 09:59:43.720055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.526 [2024-11-20 09:59:43.720069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.526 [2024-11-20 09:59:43.720077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.526 [2024-11-20 09:59:43.720084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.526 [2024-11-20 09:59:43.720100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.527 qpair failed and we were unable to recover it.
00:27:20.527 [2024-11-20 09:59:43.729990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.527 [2024-11-20 09:59:43.730057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.527 [2024-11-20 09:59:43.730072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.527 [2024-11-20 09:59:43.730079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.527 [2024-11-20 09:59:43.730085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.527 [2024-11-20 09:59:43.730100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.527 qpair failed and we were unable to recover it.
00:27:20.527 [2024-11-20 09:59:43.740048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.527 [2024-11-20 09:59:43.740103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.527 [2024-11-20 09:59:43.740118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.527 [2024-11-20 09:59:43.740125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.527 [2024-11-20 09:59:43.740132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.527 [2024-11-20 09:59:43.740148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.527 qpair failed and we were unable to recover it.
00:27:20.527 [2024-11-20 09:59:43.749997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.527 [2024-11-20 09:59:43.750055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.527 [2024-11-20 09:59:43.750069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.527 [2024-11-20 09:59:43.750077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.527 [2024-11-20 09:59:43.750083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.527 [2024-11-20 09:59:43.750099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.527 qpair failed and we were unable to recover it.
00:27:20.527 [2024-11-20 09:59:43.760109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:20.527 [2024-11-20 09:59:43.760168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:20.527 [2024-11-20 09:59:43.760183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:20.527 [2024-11-20 09:59:43.760190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:20.527 [2024-11-20 09:59:43.760196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90
00:27:20.527 [2024-11-20 09:59:43.760212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:20.527 qpair failed and we were unable to recover it.
00:27:20.527 [2024-11-20 09:59:43.770191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.527 [2024-11-20 09:59:43.770300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.527 [2024-11-20 09:59:43.770314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.527 [2024-11-20 09:59:43.770322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.527 [2024-11-20 09:59:43.770330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.527 [2024-11-20 09:59:43.770345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.527 qpair failed and we were unable to recover it. 
00:27:20.527 [2024-11-20 09:59:43.780180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.527 [2024-11-20 09:59:43.780232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.527 [2024-11-20 09:59:43.780249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.527 [2024-11-20 09:59:43.780256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.527 [2024-11-20 09:59:43.780263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.527 [2024-11-20 09:59:43.780277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.527 qpair failed and we were unable to recover it. 
00:27:20.527 [2024-11-20 09:59:43.790251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.527 [2024-11-20 09:59:43.790338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.527 [2024-11-20 09:59:43.790353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.527 [2024-11-20 09:59:43.790360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.527 [2024-11-20 09:59:43.790367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.527 [2024-11-20 09:59:43.790383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.527 qpair failed and we were unable to recover it. 
00:27:20.527 [2024-11-20 09:59:43.800215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.527 [2024-11-20 09:59:43.800271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.527 [2024-11-20 09:59:43.800286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.527 [2024-11-20 09:59:43.800293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.527 [2024-11-20 09:59:43.800300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.527 [2024-11-20 09:59:43.800315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.527 qpair failed and we were unable to recover it. 
00:27:20.527 [2024-11-20 09:59:43.810225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.527 [2024-11-20 09:59:43.810289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.527 [2024-11-20 09:59:43.810302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.527 [2024-11-20 09:59:43.810310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.527 [2024-11-20 09:59:43.810316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.527 [2024-11-20 09:59:43.810332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.527 qpair failed and we were unable to recover it. 
00:27:20.527 [2024-11-20 09:59:43.820295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.527 [2024-11-20 09:59:43.820349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.527 [2024-11-20 09:59:43.820363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.527 [2024-11-20 09:59:43.820371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.527 [2024-11-20 09:59:43.820381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.527 [2024-11-20 09:59:43.820396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.527 qpair failed and we were unable to recover it. 
00:27:20.527 [2024-11-20 09:59:43.830326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.527 [2024-11-20 09:59:43.830412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.527 [2024-11-20 09:59:43.830425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.527 [2024-11-20 09:59:43.830432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.527 [2024-11-20 09:59:43.830438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.527 [2024-11-20 09:59:43.830452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.527 qpair failed and we were unable to recover it. 
00:27:20.527 [2024-11-20 09:59:43.840286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.527 [2024-11-20 09:59:43.840360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.527 [2024-11-20 09:59:43.840373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.527 [2024-11-20 09:59:43.840380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.527 [2024-11-20 09:59:43.840387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.527 [2024-11-20 09:59:43.840402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.527 qpair failed and we were unable to recover it. 
00:27:20.527 [2024-11-20 09:59:43.850329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.527 [2024-11-20 09:59:43.850416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.527 [2024-11-20 09:59:43.850431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.527 [2024-11-20 09:59:43.850438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.527 [2024-11-20 09:59:43.850445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.527 [2024-11-20 09:59:43.850461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.527 qpair failed and we were unable to recover it. 
00:27:20.787 [2024-11-20 09:59:43.860350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.787 [2024-11-20 09:59:43.860436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.787 [2024-11-20 09:59:43.860450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.787 [2024-11-20 09:59:43.860457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.787 [2024-11-20 09:59:43.860464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.787 [2024-11-20 09:59:43.860479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.787 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.870355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.870408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.870422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.870429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.870436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.870452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.880452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.880540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.880554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.880561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.880567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.880582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.890513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.890578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.890600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.890607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.890614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.890630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.900506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.900578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.900592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.900600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.900607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.900624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.910454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.910516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.910535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.910543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.910549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.910565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.920490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.920548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.920562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.920570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.920577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.920594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.930537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.930595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.930609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.930616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.930622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.930637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.940628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.940711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.940726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.940734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.940741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.940756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.950665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.950764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.950778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.950785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.950794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.950810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.960612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.960669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.960682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.960689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.960696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.960712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.970699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.970751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.970765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.970772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.970779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.970794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.980653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.980706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.980720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.980727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.788 [2024-11-20 09:59:43.980734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.788 [2024-11-20 09:59:43.980749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.788 qpair failed and we were unable to recover it. 
00:27:20.788 [2024-11-20 09:59:43.990748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.788 [2024-11-20 09:59:43.990803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.788 [2024-11-20 09:59:43.990818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.788 [2024-11-20 09:59:43.990825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:43.990831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:43.990848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.000786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.000844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.000859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.000867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.000874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.000890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.010789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.010846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.010860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.010867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.010874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.010889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.020835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.020889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.020903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.020910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.020917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.020932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.030803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.030858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.030871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.030879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.030886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.030901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.040912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.040976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.040994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.041002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.041008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.041024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.050900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.050959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.050973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.050980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.050987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.051002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.060919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.060980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.060994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.061001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.061008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.061024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.070995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.071052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.071067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.071075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.071082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.071098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.081027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.081081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.081095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.081105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.081112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.081127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.091063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.091117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.091132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.091140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.091146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.091162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 
00:27:20.789 [2024-11-20 09:59:44.101043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:20.789 [2024-11-20 09:59:44.101100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:20.789 [2024-11-20 09:59:44.101114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:20.789 [2024-11-20 09:59:44.101122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:20.789 [2024-11-20 09:59:44.101128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7ba8000b90 00:27:20.789 [2024-11-20 09:59:44.101143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:20.789 qpair failed and we were unable to recover it. 00:27:20.789 [2024-11-20 09:59:44.101243] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:20.789 A controller has encountered a failure and is being reset. 00:27:21.049 Controller properly reset. 00:27:21.049 Initializing NVMe Controllers 00:27:21.049 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:21.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:21.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:21.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:21.049 Initialization complete. Launching workers. 
00:27:21.049 Starting thread on core 1 00:27:21.049 Starting thread on core 2 00:27:21.049 Starting thread on core 3 00:27:21.049 Starting thread on core 0 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:21.049 00:27:21.049 real 0m10.790s 00:27:21.049 user 0m19.613s 00:27:21.049 sys 0m4.634s 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.049 ************************************ 00:27:21.049 END TEST nvmf_target_disconnect_tc2 00:27:21.049 ************************************ 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.049 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.049 rmmod nvme_tcp 00:27:21.049 rmmod nvme_fabrics 00:27:21.049 rmmod nvme_keyring 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3072868 ']' 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3072868 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3072868 ']' 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3072868 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3072868 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3072868' 00:27:21.308 killing process with pid 3072868 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3072868 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3072868 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:21.308 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.309 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.309 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.309 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.309 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.309 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.309 09:59:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.846 09:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.847 00:27:23.847 real 0m19.556s 00:27:23.847 user 0m47.217s 00:27:23.847 sys 0m9.553s 00:27:23.847 09:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.847 09:59:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.847 ************************************ 00:27:23.847 END TEST nvmf_target_disconnect 00:27:23.847 ************************************ 00:27:23.847 09:59:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:23.847 00:27:23.847 real 5m52.265s 00:27:23.847 user 10m36.254s 00:27:23.847 sys 1m58.233s 00:27:23.847 09:59:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.847 09:59:46 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.847 ************************************ 00:27:23.847 END TEST nvmf_host 00:27:23.847 ************************************ 00:27:23.847 09:59:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:23.847 09:59:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:23.847 09:59:46 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:23.847 09:59:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:23.847 09:59:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.847 09:59:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.847 ************************************ 00:27:23.847 START TEST nvmf_target_core_interrupt_mode 00:27:23.847 ************************************ 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:23.847 * Looking for test storage... 
00:27:23.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1703 -- # lcov --version 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:23.847 09:59:46 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:27:23.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.847 --rc 
genhtml_branch_coverage=1 00:27:23.847 --rc genhtml_function_coverage=1 00:27:23.847 --rc genhtml_legend=1 00:27:23.847 --rc geninfo_all_blocks=1 00:27:23.847 --rc geninfo_unexecuted_blocks=1 00:27:23.847 00:27:23.847 ' 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:27:23.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.847 --rc genhtml_branch_coverage=1 00:27:23.847 --rc genhtml_function_coverage=1 00:27:23.847 --rc genhtml_legend=1 00:27:23.847 --rc geninfo_all_blocks=1 00:27:23.847 --rc geninfo_unexecuted_blocks=1 00:27:23.847 00:27:23.847 ' 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:27:23.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.847 --rc genhtml_branch_coverage=1 00:27:23.847 --rc genhtml_function_coverage=1 00:27:23.847 --rc genhtml_legend=1 00:27:23.847 --rc geninfo_all_blocks=1 00:27:23.847 --rc geninfo_unexecuted_blocks=1 00:27:23.847 00:27:23.847 ' 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:27:23.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.847 --rc genhtml_branch_coverage=1 00:27:23.847 --rc genhtml_function_coverage=1 00:27:23.847 --rc genhtml_legend=1 00:27:23.847 --rc geninfo_all_blocks=1 00:27:23.847 --rc geninfo_unexecuted_blocks=1 00:27:23.847 00:27:23.847 ' 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:23.847 09:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.847 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.848 
09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.848 09:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:23.848 
09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:23.848 ************************************ 00:27:23.848 START TEST nvmf_abort 00:27:23.848 ************************************ 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:23.848 * Looking for test storage... 
00:27:23.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1703 -- # lcov --version 00:27:23.848 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:27:24.108 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:27:24.108 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.108 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.108 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.108 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.108 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.108 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.108 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:24.109 09:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:27:24.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.109 --rc genhtml_branch_coverage=1 00:27:24.109 --rc genhtml_function_coverage=1 00:27:24.109 --rc genhtml_legend=1 00:27:24.109 --rc geninfo_all_blocks=1 00:27:24.109 --rc geninfo_unexecuted_blocks=1 00:27:24.109 00:27:24.109 ' 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:27:24.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.109 --rc genhtml_branch_coverage=1 00:27:24.109 --rc genhtml_function_coverage=1 00:27:24.109 --rc genhtml_legend=1 00:27:24.109 --rc geninfo_all_blocks=1 00:27:24.109 --rc geninfo_unexecuted_blocks=1 00:27:24.109 00:27:24.109 ' 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:27:24.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.109 --rc genhtml_branch_coverage=1 00:27:24.109 --rc genhtml_function_coverage=1 00:27:24.109 --rc genhtml_legend=1 00:27:24.109 --rc geninfo_all_blocks=1 00:27:24.109 --rc geninfo_unexecuted_blocks=1 00:27:24.109 00:27:24.109 ' 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:27:24.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.109 --rc genhtml_branch_coverage=1 00:27:24.109 --rc genhtml_function_coverage=1 00:27:24.109 --rc genhtml_legend=1 00:27:24.109 --rc geninfo_all_blocks=1 00:27:24.109 --rc geninfo_unexecuted_blocks=1 00:27:24.109 00:27:24.109 ' 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.109 09:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.109 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.110 09:59:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.110 09:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:30.684 09:59:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:30.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:30.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:30.684 
09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.684 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:30.684 Found net devices under 0000:86:00.0: cvl_0_0 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:30.685 Found net devices under 0000:86:00.1: cvl_0_1 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.685 09:59:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:30.685 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:30.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:27:30.685 00:27:30.685 --- 10.0.0.2 ping statistics --- 00:27:30.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.685 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:27:30.685 00:27:30.685 --- 10.0.0.1 ping statistics --- 00:27:30.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.685 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
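The nvmf_tcp_init portion of the log above builds a loopback TCP testbed on one machine by moving the target-side NIC port into a network namespace. A hedged sketch of that setup (requires root; the interface names `cvl_0_0`/`cvl_0_1` and addresses are the ones this run happened to use — treat them as placeholders):

```shell
# Namespace-based NVMe/TCP testbed, as set up by nvmf/common.sh above.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                      # isolated namespace for the target
ip link set cvl_0_0 netns "$NS"         # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic to the default port, then verify reachability
# in both directions, as the log's two ping runs do.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting the target in its own namespace lets two ports of the same physical NIC talk to each other over real TCP without the kernel short-circuiting the traffic through loopback routing.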
nvmfpid=3077610 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3077610 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3077610 ']' 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.685 [2024-11-20 09:59:53.181127] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:30.685 [2024-11-20 09:59:53.182124] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:27:30.685 [2024-11-20 09:59:53.182163] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.685 [2024-11-20 09:59:53.261095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:30.685 [2024-11-20 09:59:53.303569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.685 [2024-11-20 09:59:53.303605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.685 [2024-11-20 09:59:53.303612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.685 [2024-11-20 09:59:53.303618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.685 [2024-11-20 09:59:53.303623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.685 [2024-11-20 09:59:53.307964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.685 [2024-11-20 09:59:53.308052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.685 [2024-11-20 09:59:53.308053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.685 [2024-11-20 09:59:53.375479] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:30.685 [2024-11-20 09:59:53.376258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:30.685 [2024-11-20 09:59:53.376552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:30.685 [2024-11-20 09:59:53.376702] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.685 [2024-11-20 09:59:53.456778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:30.685 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:30.686 Malloc0 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.686 Delay0 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.686 [2024-11-20 09:59:53.548754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.686 09:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:30.686 [2024-11-20 09:59:53.721103] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:32.589 Initializing NVMe Controllers 00:27:32.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:32.589 controller IO queue size 128 less than required 00:27:32.589 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:32.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:32.589 Initialization complete. Launching workers. 
00:27:32.589 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36799 00:27:32.589 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36860, failed to submit 66 00:27:32.589 success 36799, unsuccessful 61, failed 0 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:32.589 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:32.589 rmmod nvme_tcp 00:27:32.589 rmmod nvme_fabrics 00:27:32.848 rmmod nvme_keyring 00:27:32.848 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:32.848 09:59:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:32.848 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:32.848 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3077610 ']' 00:27:32.848 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3077610 00:27:32.848 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3077610 ']' 00:27:32.848 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3077610 00:27:32.848 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:32.848 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.848 09:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3077610 00:27:32.848 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:32.848 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:32.848 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3077610' 00:27:32.848 killing process with pid 3077610 00:27:32.848 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3077610 00:27:32.848 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3077610 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:33.106 09:59:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.106 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.107 09:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.010 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:35.010 00:27:35.010 real 0m11.203s 00:27:35.010 user 0m10.669s 00:27:35.010 sys 0m5.826s 00:27:35.010 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.010 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.010 ************************************ 00:27:35.010 END TEST nvmf_abort 00:27:35.010 ************************************ 00:27:35.010 09:59:58 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:35.010 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:35.010 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.010 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:35.270 ************************************ 00:27:35.270 START TEST nvmf_ns_hotplug_stress 00:27:35.270 ************************************ 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:35.270 * Looking for test storage... 
00:27:35.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # lcov --version 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.270 09:59:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.270 09:59:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:27:35.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.270 --rc genhtml_branch_coverage=1 00:27:35.270 --rc genhtml_function_coverage=1 00:27:35.270 --rc genhtml_legend=1 00:27:35.270 --rc geninfo_all_blocks=1 00:27:35.270 --rc geninfo_unexecuted_blocks=1 00:27:35.270 00:27:35.270 ' 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:27:35.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.270 --rc genhtml_branch_coverage=1 00:27:35.270 --rc genhtml_function_coverage=1 00:27:35.270 --rc genhtml_legend=1 00:27:35.270 --rc geninfo_all_blocks=1 00:27:35.270 --rc geninfo_unexecuted_blocks=1 00:27:35.270 00:27:35.270 ' 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:27:35.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.270 --rc genhtml_branch_coverage=1 00:27:35.270 --rc genhtml_function_coverage=1 00:27:35.270 --rc genhtml_legend=1 00:27:35.270 --rc geninfo_all_blocks=1 00:27:35.270 --rc geninfo_unexecuted_blocks=1 00:27:35.270 00:27:35.270 ' 00:27:35.270 09:59:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:27:35.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.270 --rc genhtml_branch_coverage=1 00:27:35.270 --rc genhtml_function_coverage=1 00:27:35.270 --rc genhtml_legend=1 00:27:35.270 --rc geninfo_all_blocks=1 00:27:35.270 --rc geninfo_unexecuted_blocks=1 00:27:35.270 00:27:35.270 ' 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.270 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.271 09:59:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.271 
09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.271 09:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.851 
10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.851 10:00:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:41.851 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.851 10:00:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:41.851 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.851 
10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:41.851 Found net devices under 0000:86:00.0: cvl_0_0 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:41.851 Found net devices under 0000:86:00.1: cvl_0_1 00:27:41.851 
10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.851 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:27:41.852 00:27:41.852 --- 10.0.0.2 ping statistics --- 00:27:41.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.852 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:27:41.852 00:27:41.852 --- 10.0.0.1 ping statistics --- 00:27:41.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.852 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:41.852 10:00:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3081730 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3081730 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3081730 ']' 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.852 10:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:41.852 [2024-11-20 10:00:04.506384] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:41.852 [2024-11-20 10:00:04.507366] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:27:41.852 [2024-11-20 10:00:04.507406] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.852 [2024-11-20 10:00:04.589974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:41.852 [2024-11-20 10:00:04.631180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.852 [2024-11-20 10:00:04.631217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.852 [2024-11-20 10:00:04.631224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.852 [2024-11-20 10:00:04.631231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.852 [2024-11-20 10:00:04.631236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:41.852 [2024-11-20 10:00:04.632567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.852 [2024-11-20 10:00:04.632672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.852 [2024-11-20 10:00:04.632673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:41.852 [2024-11-20 10:00:04.701021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:41.852 [2024-11-20 10:00:04.701916] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:41.852 [2024-11-20 10:00:04.702252] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:41.852 [2024-11-20 10:00:04.702317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:42.112 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.112 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:42.112 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:42.112 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.112 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:42.112 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.112 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:42.112 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:42.372 [2024-11-20 10:00:05.561422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.372 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:42.631 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.631 [2024-11-20 10:00:05.957837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.889 10:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:42.889 10:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:43.148 Malloc0 00:27:43.148 10:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:43.407 Delay0 00:27:43.407 10:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.666 10:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:43.666 NULL1 00:27:43.666 10:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:43.925 10:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:43.925 10:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3082139 00:27:43.925 10:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:43.925 10:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.183 10:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.441 10:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:44.441 10:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:44.701 true 00:27:44.701 10:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:44.701 10:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:44.701 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.959 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:44.959 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:45.218 true 00:27:45.218 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:45.218 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.154 Read completed with error (sct=0, sc=11) 00:27:46.412 10:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.412 10:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:46.412 10:00:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:46.671 true 00:27:46.671 10:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:46.671 10:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.930 10:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.189 10:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:47.189 10:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:47.189 true 00:27:47.189 10:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:47.189 10:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.567 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:48.567 10:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.567 10:00:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:48.567 10:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:48.826 true 00:27:48.826 10:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:48.826 10:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.826 10:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.086 10:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:49.086 10:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:49.344 true 00:27:49.344 10:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:49.344 10:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.722 10:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.722 10:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:50.722 10:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:50.981 true 00:27:50.981 10:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:50.981 10:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.915 10:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.915 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:51.915 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 
00:27:52.173 true 00:27:52.173 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:52.173 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:52.173 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.431 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:52.431 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:52.689 true 00:27:52.689 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:52.689 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.062 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:54.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.062 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.062 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:54.062 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:54.321 true 00:27:54.321 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:54.321 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.257 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:55.257 10:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.257 10:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:55.257 10:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:55.515 true 00:27:55.515 10:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:55.515 10:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:55.774 10:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:55.774 10:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:55.774 10:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:56.032 true 00:27:56.032 10:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:56.032 10:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.410 10:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.410 10:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1013 00:27:57.410 10:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:57.669 true 00:27:57.669 10:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:57.669 10:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.604 10:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.604 10:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:58.604 10:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:58.862 true 00:27:58.862 10:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:58.862 10:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.121 10:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.381 10:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:59.381 10:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:59.381 true 00:27:59.381 10:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:27:59.381 10:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:00.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.760 10:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.760 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:00.760 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:01.019 true 00:28:01.019 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:01.019 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.957 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.957 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:01.957 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:02.218 true 00:28:02.218 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:02.218 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.477 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.736 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:02.736 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:02.736 true 00:28:02.736 10:00:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:02.736 10:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.114 10:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.114 10:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:04.114 10:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:04.373 true 00:28:04.373 10:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:04.373 10:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.311 10:00:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.311 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:05.311 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:05.570 true 00:28:05.570 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:05.570 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.828 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.828 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:05.828 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:06.086 true 00:28:06.086 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:06.086 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.461 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.461 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:07.461 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:07.719 true 00:28:07.719 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:07.719 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.689 10:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.689 10:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1023 00:28:08.689 10:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:08.997 true 00:28:08.997 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:08.997 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.997 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.256 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:09.256 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:09.515 true 00:28:09.515 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:09.515 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.451 10:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.710 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.710 10:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:10.710 10:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:10.969 true 00:28:10.969 10:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:10.969 10:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.229 10:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.230 10:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:11.230 10:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:11.489 true 00:28:11.489 10:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:11.489 10:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.865 10:00:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.865 10:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:12.865 10:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:13.124 true 00:28:13.124 10:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139 00:28:13.124 10:00:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.953 10:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:13.953 10:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:13.953 10:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:14.212 true
00:28:14.212 10:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139
00:28:14.213 10:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:14.213 Initializing NVMe Controllers
00:28:14.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:14.213 Controller IO queue size 128, less than required.
00:28:14.213 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:14.213 Controller IO queue size 128, less than required.
00:28:14.213 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:14.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:14.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:14.213 Initialization complete. Launching workers.
00:28:14.213 ========================================================
00:28:14.213 Latency(us)
00:28:14.213 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:28:14.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1839.21       0.90   45288.77    2506.79 1013054.51
00:28:14.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16904.76       8.25    7553.18    1607.53  380315.73
00:28:14.213 ========================================================
00:28:14.213 Total                                                  :   18743.97       9.15   11255.91    1607.53 1013054.51
00:28:14.213
00:28:14.472 10:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:14.731 10:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:28:14.731 10:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:28:14.731 true
00:28:14.731 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3082139
00:28:14.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3082139) - No such process
00:28:14.731 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3082139
00:28:14.731 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:14.990 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:15.249 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:15.249 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:15.249 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:15.249 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.249 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:15.508 null0 00:28:15.508 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.508 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.508 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:15.508 null1 00:28:15.767 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.767 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.767 10:00:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:15.767 null2 00:28:15.767 10:00:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:15.767 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:15.767 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:16.027 null3 00:28:16.027 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:16.027 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:16.027 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:16.287 null4 00:28:16.287 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:16.287 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:16.287 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:16.287 null5 00:28:16.287 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:16.287 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:16.287 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:16.546 null6 00:28:16.546 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:16.546 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:16.546 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:16.806 null7 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.806 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3087798 3087801 3087803 3087806 3087809 3087813 3087816 3087819 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:16.807 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.066 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.326 10:00:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.326 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:17.585 10:00:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:17.585 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:17.844 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.844 10:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:17.844 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:17.844 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:17.844 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:17.844 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:17.844 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:17.844 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.104 10:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.104 10:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.104 10:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.104 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.363 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.363 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.364 10:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.364 10:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.364 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.364 10:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.623 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:18.623 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:18.623 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.623 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.623 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.623 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.623 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.623 10:00:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.883 10:00:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:18.883 10:00:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.883 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.142 10:00:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.142 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.401 10:00:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.401 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.402 10:00:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.402 10:00:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.402 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.661 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.662 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.920 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.920 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.920 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.920 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.920 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.920 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.920 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.920 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.179 10:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.179 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.437 10:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.437 10:00:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.437 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.695 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.695 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.695 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.695 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.695 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.695 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.695 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.695 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.954 10:00:44 
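The trace above is the xtrace output of a tight loop in `target/ns_hotplug_stress.sh` (lines 16-18): ten iterations that attach null bdevs `null0`..`null7` as namespaces 1-8 of `nqn.2016-06.io.spdk:cnode1` via `rpc.py nvmf_subsystem_add_ns`, then detach them with `nvmf_subsystem_remove_ns`. A minimal dry-run reconstruction of that loop, with `rpc` stubbed to echo the RPC it would issue instead of calling the real `spdk/scripts/rpc.py` (the real test also issues the calls in a shuffled order; sequential here for clarity):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the hotplug stress loop seen in the trace.
# "rpc" is a stub standing in for spdk/scripts/rpc.py.
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }

stress_loop() {
    local i=0 n
    while (( i < 10 )); do
        # Attach null bdevs null0..null7 as namespaces 1..8.
        for n in {1..8}; do
            rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
        done
        # Detach them again, exercising the namespace hotplug path.
        for n in {1..8}; do
            rpc nvmf_subsystem_remove_ns "$NQN" "$n"
        done
        (( ++i ))
    done
}
stress_loop
```

Each iteration emits 16 RPCs (8 adds, 8 removes), so the sketch prints 160 lines over its 10 iterations, matching the volume of add/remove entries in the log.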
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.954 rmmod nvme_tcp 00:28:20.954 rmmod nvme_fabrics 00:28:20.954 rmmod nvme_keyring 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3081730 ']' 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3081730 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3081730 ']' 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3081730 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:20.954 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.954 10:00:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3081730 00:28:21.212 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3081730' 00:28:21.213 killing process with pid 3081730 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3081730 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3081730 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:21.213 
10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.213 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:23.749 00:28:23.749 real 0m48.217s 00:28:23.749 user 2m59.399s 00:28:23.749 sys 0m20.400s 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:23.749 ************************************ 00:28:23.749 END TEST nvmf_ns_hotplug_stress 00:28:23.749 ************************************ 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:23.749 ************************************ 00:28:23.749 START TEST nvmf_delete_subsystem 00:28:23.749 ************************************ 00:28:23.749 10:00:46 
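Before the test summary, the trace walks through `nvmftestfini`/`nvmf_tcp_fini` from `nvmf/common.sh`: sync, unload the kernel NVMe-oF modules (the `rmmod nvme_tcp` / `nvme_fabrics` / `nvme_keyring` lines), kill the SPDK target by saved pid, restore iptables minus the SPDK rules, remove the test network namespace, and flush the test interface address. A dry-run sketch of that teardown order, with commands echoed rather than executed (the function and argument names here are illustrative, not the exact `common.sh` helpers):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmftestfini teardown sequence from the trace.
# "run" echoes each step instead of executing it.
run() { echo "$*"; }

nvmf_teardown() {
    run sync
    run modprobe -v -r nvme-tcp      # also unloads nvme_fabrics / nvme_keyring
    run kill "$1"                    # SPDK target pid recorded at startup
    run "iptables-save | grep -v SPDK_NVMF | iptables-restore"
    run ip netns delete cvl_0_0_ns_spdk
    run ip -4 addr flush cvl_0_1
}
nvmf_teardown 3081730
```

The pid `3081730` mirrors the `killprocess 3081730` call in the log; in the real script it comes from the pid file written when the target was launched.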
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:23.749 * Looking for test storage... 00:28:23.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # lcov --version 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.749 10:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.749 10:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.749 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:28:23.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.749 --rc genhtml_branch_coverage=1 00:28:23.749 --rc genhtml_function_coverage=1 00:28:23.749 --rc genhtml_legend=1 00:28:23.749 --rc geninfo_all_blocks=1 00:28:23.749 --rc geninfo_unexecuted_blocks=1 00:28:23.749 00:28:23.749 ' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:28:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.750 --rc genhtml_branch_coverage=1 00:28:23.750 --rc genhtml_function_coverage=1 00:28:23.750 --rc genhtml_legend=1 00:28:23.750 --rc geninfo_all_blocks=1 00:28:23.750 --rc geninfo_unexecuted_blocks=1 00:28:23.750 00:28:23.750 ' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:28:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.750 --rc genhtml_branch_coverage=1 00:28:23.750 --rc 
genhtml_function_coverage=1 00:28:23.750 --rc genhtml_legend=1 00:28:23.750 --rc geninfo_all_blocks=1 00:28:23.750 --rc geninfo_unexecuted_blocks=1 00:28:23.750 00:28:23.750 ' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:28:23.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.750 --rc genhtml_branch_coverage=1 00:28:23.750 --rc genhtml_function_coverage=1 00:28:23.750 --rc genhtml_legend=1 00:28:23.750 --rc geninfo_all_blocks=1 00:28:23.750 --rc geninfo_unexecuted_blocks=1 00:28:23.750 00:28:23.750 ' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.750 10:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:23.750 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:30.321 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:30.322 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:30.322 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:30.322 Found net devices under 0000:86:00.0: cvl_0_0 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:30.322 10:00:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:30.322 Found net devices under 0000:86:00.1: cvl_0_1 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.322 10:00:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:28:30.322 00:28:30.322 --- 10.0.0.2 ping statistics --- 00:28:30.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.322 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:28:30.322 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:28:30.323 00:28:30.323 --- 10.0.0.1 ping statistics --- 00:28:30.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.323 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3092112 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3092112 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3092112 ']' 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.323 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.323 [2024-11-20 10:00:52.842542] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:30.323 [2024-11-20 10:00:52.843496] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:28:30.323 [2024-11-20 10:00:52.843532] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.323 [2024-11-20 10:00:52.924594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:30.323 [2024-11-20 10:00:52.966045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.323 [2024-11-20 10:00:52.966084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.323 [2024-11-20 10:00:52.966091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.323 [2024-11-20 10:00:52.966097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.323 [2024-11-20 10:00:52.966102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.323 [2024-11-20 10:00:52.967247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.323 [2024-11-20 10:00:52.967250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.323 [2024-11-20 10:00:53.034317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:30.323 [2024-11-20 10:00:53.034938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:30.323 [2024-11-20 10:00:53.035152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.323 [2024-11-20 10:00:53.104056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.323 [2024-11-20 10:00:53.132388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.323 NULL1 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:28:30.323 Delay0 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3092281 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:30.323 10:00:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:30.323 [2024-11-20 10:00:53.244444] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
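For readability, the setup sequence that delete_subsystem.sh drives through `rpc_cmd` (visible in the xtrace above) can be summarized as the following sketch. This assumes a running `nvmf_tgt` on the default RPC socket and uses the stock `scripts/rpc.py` client; the `rpc` variable and its path are illustrative, while the commands and arguments are copied verbatim from the log:

```shell
# Sketch of the setup steps from target/delete_subsystem.sh, per the xtrace.
# Flags are taken verbatim from the log lines above.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport; -u sets the I/O unit size
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                     # allow any host, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                    # 1000 MiB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s artificial latency (values in us)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The point of the delay bdev is to keep I/O from `spdk_nvme_perf` in flight for a long time, so that the subsequent `nvmf_delete_subsystem` call tears the subsystem down while requests are still outstanding; the stream of "Read/Write completed with error (sct=0, sc=8)" lines that follows is the expected result of that forced abort, not a test failure.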
00:28:32.228 10:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:32.228 10:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.228 10:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 starting I/O failed: -6 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 starting I/O failed: -6 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 starting I/O failed: -6 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 starting I/O failed: -6 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 starting I/O failed: -6 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.228 Read completed with error (sct=0, sc=8) 00:28:32.228 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, 
sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 [2024-11-20 10:00:55.277860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1709860 is same with the state(6) to be set 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with 
error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 
00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read 
completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 starting I/O failed: -6 00:28:32.229 [2024-11-20 10:00:55.281293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe89c000c40 is same with the state(6) to be set 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 
00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed 
with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Write completed with error (sct=0, sc=8) 00:28:32.229 Read completed with error (sct=0, sc=8) 00:28:33.170 [2024-11-20 10:00:56.258105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170a9a0 is same with the state(6) to be set 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 [2024-11-20 10:00:56.281053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17092c0 is same with the state(6) to be set 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with 
error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 [2024-11-20 10:00:56.281436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1709680 is same with the state(6) to be set 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, 
sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 [2024-11-20 10:00:56.283913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe89c00d350 is same with the state(6) to be set 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 Write completed with error (sct=0, sc=8) 00:28:33.170 Read completed with error (sct=0, sc=8) 00:28:33.170 [2024-11-20 10:00:56.284379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe89c00d7e0 is same with the state(6) to be set 00:28:33.170 Initializing NVMe Controllers 
00:28:33.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.170 Controller IO queue size 128, less than required. 00:28:33.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:33.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:33.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:33.170 Initialization complete. Launching workers. 00:28:33.170 ======================================================== 00:28:33.170 Latency(us) 00:28:33.170 Device Information : IOPS MiB/s Average min max 00:28:33.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.17 0.08 888214.59 323.21 1006446.76 00:28:33.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.22 0.08 912268.66 243.34 1010326.37 00:28:33.170 ======================================================== 00:28:33.170 Total : 335.39 0.16 899849.05 243.34 1010326.37 00:28:33.170 00:28:33.170 [2024-11-20 10:00:56.284946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170a9a0 (9): Bad file descriptor 00:28:33.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:33.171 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.171 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:33.171 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3092281 00:28:33.171 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3092281 00:28:33.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3092281) - No such process 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3092281 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3092281 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3092281 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:33.740 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.741 [2024-11-20 10:00:56.816303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.741 10:00:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3092814 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3092814 00:28:33.741 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:33.741 [2024-11-20 10:00:56.902381] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:34.309 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:34.309 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3092814 00:28:34.309 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:34.568 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:34.568 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3092814 00:28:34.568 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:35.136 10:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:35.136 10:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3092814 00:28:35.136 10:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:35.708 10:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:35.708 10:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3092814 00:28:35.708 10:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:36.278 10:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:36.278 10:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3092814 00:28:36.278 10:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:36.536 10:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:36.536 10:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3092814 00:28:36.536 10:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:36.796 Initializing NVMe Controllers
00:28:36.796 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:36.796 Controller IO queue size 128, less than required.
00:28:36.796 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:36.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:36.796 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:36.796 Initialization complete. Launching workers.
00:28:36.796 ========================================================
00:28:36.796                                                                                                         Latency(us)
00:28:36.796 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:28:36.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06  1002286.89 1000134.00 1040990.81
00:28:36.796 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06  1004961.31 1000148.84 1042221.76
00:28:36.796 ========================================================
00:28:36.796 Total                                                                :     256.00       0.12  1003624.10 1000134.00 1042221.76
00:28:36.796
00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3092814 00:28:37.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3092814) - No such process 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3092814 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.056 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.056 rmmod nvme_tcp 00:28:37.056 rmmod nvme_fabrics 00:28:37.315 rmmod nvme_keyring 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3092112 ']' 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3092112 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3092112 ']' 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3092112 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3092112 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:37.315 10:01:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3092112' 00:28:37.315 killing process with pid 3092112 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3092112 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3092112 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.315 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:37.575 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:37.575 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.575 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:37.575 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.575 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.575 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.575 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.575 10:01:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.481 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:39.481
00:28:39.481 real	0m16.089s
00:28:39.481 user	0m25.903s
00:28:39.481 sys	0m6.108s
00:28:39.481 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.481 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.481 ************************************
00:28:39.481 END TEST nvmf_delete_subsystem
00:28:39.481 ************************************
00:28:39.481 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:39.481 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:39.481 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.481 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:28:39.481 ************************************
00:28:39.481 START TEST nvmf_host_management
00:28:39.481 ************************************
00:28:39.481 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:39.741 * Looking for test storage...
00:28:39.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1703 -- # lcov --version 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.741 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.742 10:01:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:28:39.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.742 --rc genhtml_branch_coverage=1 00:28:39.742 --rc genhtml_function_coverage=1 00:28:39.742 --rc genhtml_legend=1 00:28:39.742 --rc geninfo_all_blocks=1 00:28:39.742 --rc geninfo_unexecuted_blocks=1 00:28:39.742 00:28:39.742 ' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:28:39.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.742 --rc genhtml_branch_coverage=1 00:28:39.742 --rc genhtml_function_coverage=1 00:28:39.742 --rc genhtml_legend=1 00:28:39.742 --rc geninfo_all_blocks=1 00:28:39.742 --rc geninfo_unexecuted_blocks=1 00:28:39.742 00:28:39.742 ' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:28:39.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.742 --rc genhtml_branch_coverage=1 00:28:39.742 --rc genhtml_function_coverage=1 00:28:39.742 --rc genhtml_legend=1 00:28:39.742 --rc geninfo_all_blocks=1 00:28:39.742 --rc geninfo_unexecuted_blocks=1 00:28:39.742 00:28:39.742 ' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:28:39.742 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.742 --rc genhtml_branch_coverage=1 00:28:39.742 --rc genhtml_function_coverage=1 00:28:39.742 --rc genhtml_legend=1 00:28:39.742 --rc geninfo_all_blocks=1 00:28:39.742 --rc geninfo_unexecuted_blocks=1 00:28:39.742 00:28:39.742 ' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.742 10:01:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.742 
10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:39.742 10:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:39.742 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.743 10:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.313 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.313 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:46.313 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:46.313 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:46.313 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:46.313 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:46.313 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:46.313 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:46.313 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:46.314 
10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.314 10:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:46.314 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.314 10:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:46.314 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.314 10:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:46.314 Found net devices under 0000:86:00.0: cvl_0_0 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:46.314 Found net devices under 0000:86:00.1: cvl_0_1 00:28:46.314 10:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:46.314 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:46.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:28:46.314 00:28:46.314 --- 10.0.0.2 ping statistics --- 00:28:46.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.315 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:28:46.315 00:28:46.315 --- 10.0.0.1 ping statistics --- 00:28:46.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.315 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
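The `nvmf_tcp_init` sequence traced above moves one port of the NIC pair into a network namespace so that target and initiator traffic actually crosses the wire, then checks connectivity in both directions with `ping`. A minimal dry-run sketch of that pattern follows; the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are taken from the log, while the `run` wrapper is a hypothetical helper that echoes each command instead of executing it (the real commands need root):

```shell
#!/bin/sh
# Dry-run sketch of the netns-based TCP test topology set up by nvmf_tcp_init.
# `run` is a stand-in that prints each command rather than executing it,
# so the sequence can be inspected without root privileges.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Connectivity check in both directions, as the log shows:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target interface lives inside the namespace, `nvmf_tgt` must later be launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix visible further down in the log).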
00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3096823 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3096823 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3096823 ']' 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.315 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.315 [2024-11-20 10:01:08.962454] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:46.315 [2024-11-20 10:01:08.963407] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:28:46.315 [2024-11-20 10:01:08.963442] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.315 [2024-11-20 10:01:09.042094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.315 [2024-11-20 10:01:09.085830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.315 [2024-11-20 10:01:09.085870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.315 [2024-11-20 10:01:09.085878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.315 [2024-11-20 10:01:09.085884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.315 [2024-11-20 10:01:09.085889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:46.315 [2024-11-20 10:01:09.087548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.315 [2024-11-20 10:01:09.087661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.315 [2024-11-20 10:01:09.087766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.315 [2024-11-20 10:01:09.087768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:46.315 [2024-11-20 10:01:09.156427] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:46.315 [2024-11-20 10:01:09.157306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:46.315 [2024-11-20 10:01:09.157421] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:46.315 [2024-11-20 10:01:09.157728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:46.315 [2024-11-20 10:01:09.157795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.315 [2024-11-20 10:01:09.220451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.315 10:01:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.315 Malloc0 00:28:46.315 [2024-11-20 10:01:09.312767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3097065 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3097065 /var/tmp/bdevperf.sock 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3097065 ']' 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:46.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:46.315 { 00:28:46.315 "params": { 00:28:46.315 "name": "Nvme$subsystem", 00:28:46.315 "trtype": "$TEST_TRANSPORT", 00:28:46.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.315 "adrfam": "ipv4", 00:28:46.315 "trsvcid": "$NVMF_PORT", 00:28:46.315 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.315 "hdgst": ${hdgst:-false}, 00:28:46.315 "ddgst": ${ddgst:-false} 00:28:46.315 }, 00:28:46.315 "method": "bdev_nvme_attach_controller" 00:28:46.315 } 00:28:46.315 EOF 00:28:46.315 )") 00:28:46.315 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:46.316 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:46.316 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:46.316 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:46.316 "params": { 00:28:46.316 "name": "Nvme0", 00:28:46.316 "trtype": "tcp", 00:28:46.316 "traddr": "10.0.0.2", 00:28:46.316 "adrfam": "ipv4", 00:28:46.316 "trsvcid": "4420", 00:28:46.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:46.316 "hdgst": false, 00:28:46.316 "ddgst": false 00:28:46.316 }, 00:28:46.316 "method": "bdev_nvme_attach_controller" 00:28:46.316 }' 00:28:46.316 [2024-11-20 10:01:09.413326] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:28:46.316 [2024-11-20 10:01:09.413374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097065 ] 00:28:46.316 [2024-11-20 10:01:09.491963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.316 [2024-11-20 10:01:09.533193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.574 Running I/O for 10 seconds... 
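The `gen_nvmf_target_json` output printed above shows the pattern: one JSON stanza per subsystem is built from a heredoc template, the templates are joined, and the result is piped to `bdevperf` via `--json /dev/fd/63`. The sketch below reproduces the resolved stanza from the log and validates it; the `python3` validation step is illustrative (the harness itself uses `jq` to merge and pretty-print the config):

```shell
#!/bin/sh
# Sketch of the gen_nvmf_target_json pattern: a per-subsystem JSON stanza,
# with values matching the resolved config printed in the log, validated
# before being handed to bdevperf.
config=$(cat <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
# Confirm the assembled stanza is well-formed JSON before use.
printf '%s' "$config" | python3 -c 'import json,sys; json.load(sys.stdin)'
```

Note that `hdgst`/`ddgst` default to `false` via the `${hdgst:-false}` parameter expansions in the template, which is why the resolved config shows digests disabled.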
00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:47.143 10:01:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=846 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 846 -ge 100 ']' 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:47.143 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:47.144 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.144 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:47.144 
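The `waitforio` loop traced above polls `bdev_get_iostat` up to 10 times and succeeds once `num_read_ops` crosses a threshold (here 846 reads against a floor of 100). A self-contained sketch of that polling pattern follows, with `rpc_cmd` stubbed to return canned JSON and `python3` standing in for the `jq -r '.bdevs[0].num_read_ops'` extraction the real script uses:

```shell
#!/bin/sh
# Sketch of the waitforio polling pattern: query bdev iostat up to 10 times
# and return success once num_read_ops reaches the threshold. rpc_cmd is a
# stub returning canned JSON; the real harness calls rpc.py against
# bdevperf's UNIX socket.
rpc_cmd() { echo '{"bdevs":[{"name":"Nvme0n1","num_read_ops":846}]}'; }

waitforio() {
    threshold=$1 i=10 ret=1
    while [ "$i" -gt 0 ]; do
        reads=$(rpc_cmd | python3 -c \
            'import json,sys; print(json.load(sys.stdin)["bdevs"][0]["num_read_ops"])')
        if [ "$reads" -ge "$threshold" ]; then
            ret=0
            break
        fi
        i=$((i - 1))
        sleep 1
    done
    return $ret
}

waitforio 100 && echo "I/O observed"
```

In the real test a zero return from `waitforio` is what allows the script to proceed to the `nvmf_subsystem_remove_host` step, confirming I/O was flowing before the host is yanked from the subsystem's allowlist.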
[2024-11-20 10:01:10.344190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set 00:28:47.144 [2024-11-20 10:01:10.344305] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd8ec0 is same with the state(6) to be set
00:28:47.144 [... identical tcp.c:1773 recv-state *ERROR* message repeated 6 more times, timestamps 10:01:10.344311 through 10:01:10.344347 ...]
00:28:47.144 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:47.144 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:47.144 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.144 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:47.144 [2024-11-20 10:01:10.350443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:47.144 [2024-11-20 10:01:10.350476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.144 [... same WRITE command / ABORTED - SQ DELETION (00/08) completion pair repeated for cid:1 through cid:63, lba 123008 through 130944 (each len:128, lba advancing by 128 blocks), timestamps 10:01:10.350493 through 10:01:10.351450 ...]
00:28:47.145 [2024-11-20 10:01:10.351547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:47.145 [2024-11-20 10:01:10.351558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:47.145 [... same ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) pair repeated for qid:0 cid:1 through cid:3, timestamps 10:01:10.351566 through 10:01:10.351601 ...]
00:28:47.146 [2024-11-20 10:01:10.351607] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1984500 is same with the state(6) to be set 00:28:47.146 [2024-11-20 10:01:10.352507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:47.146 task offset: 122880 on job bdev=Nvme0n1 fails
00:28:47.146
00:28:47.146 Latency(us)
00:28:47.146 [2024-11-20T09:01:10.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:47.146 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:47.146 Job: Nvme0n1 ended in about 0.50 seconds with error
00:28:47.146 Verification LBA range: start 0x0 length 0x400
00:28:47.146 Nvme0n1 : 0.50 1924.43 120.28 128.30 0.00 30447.72 1752.38 27810.06
00:28:47.146 [2024-11-20T09:01:10.478Z] ===================================================================================================================
00:28:47.146 [2024-11-20T09:01:10.478Z] Total : 1924.43 120.28 128.30 0.00 30447.72 1752.38 27810.06
00:28:47.146 [2024-11-20 10:01:10.354897] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:47.146 [2024-11-20 10:01:10.354917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1984500 (9): Bad file descriptor 00:28:47.146 [2024-11-20 10:01:10.355906] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:47.146 [2024-11-20 10:01:10.355993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:47.146 [2024-11-20 10:01:10.356016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.146 [2024-11-20 10:01:10.356033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode0 00:28:47.146 [2024-11-20 10:01:10.356040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:47.146 [2024-11-20 10:01:10.356047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.146 [2024-11-20 10:01:10.356054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1984500 00:28:47.146 [2024-11-20 10:01:10.356073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1984500 (9): Bad file descriptor 00:28:47.146 [2024-11-20 10:01:10.356085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:47.146 [2024-11-20 10:01:10.356092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:47.146 [2024-11-20 10:01:10.356100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:47.146 [2024-11-20 10:01:10.356108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:47.146 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.146 10:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:48.083 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3097065 00:28:48.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3097065) - No such process 00:28:48.083 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:48.083 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:48.084 { 00:28:48.084 "params": { 00:28:48.084 "name": "Nvme$subsystem", 00:28:48.084 "trtype": "$TEST_TRANSPORT", 00:28:48.084 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:28:48.084 "adrfam": "ipv4", 00:28:48.084 "trsvcid": "$NVMF_PORT", 00:28:48.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.084 "hdgst": ${hdgst:-false}, 00:28:48.084 "ddgst": ${ddgst:-false} 00:28:48.084 }, 00:28:48.084 "method": "bdev_nvme_attach_controller" 00:28:48.084 } 00:28:48.084 EOF 00:28:48.084 )") 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:48.084 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:48.084 "params": { 00:28:48.084 "name": "Nvme0", 00:28:48.084 "trtype": "tcp", 00:28:48.084 "traddr": "10.0.0.2", 00:28:48.084 "adrfam": "ipv4", 00:28:48.084 "trsvcid": "4420", 00:28:48.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.084 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:48.084 "hdgst": false, 00:28:48.084 "ddgst": false 00:28:48.084 }, 00:28:48.084 "method": "bdev_nvme_attach_controller" 00:28:48.084 }' 00:28:48.342 [2024-11-20 10:01:11.416696] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:28:48.342 [2024-11-20 10:01:11.416744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097318 ] 00:28:48.342 [2024-11-20 10:01:11.493468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.342 [2024-11-20 10:01:11.532732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.602 Running I/O for 1 seconds... 
00:28:49.540 1984.00 IOPS, 124.00 MiB/s
00:28:49.540 Latency(us)
00:28:49.540 [2024-11-20T09:01:12.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:49.540 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:49.540 Verification LBA range: start 0x0 length 0x400
00:28:49.540 Nvme0n1 : 1.02 2009.86 125.62 0.00 0.00 31340.32 4445.05 27924.03
00:28:49.540 [2024-11-20T09:01:12.872Z] ===================================================================================================================
00:28:49.540 [2024-11-20T09:01:12.872Z] Total : 2009.86 125.62 0.00 0.00 31340.32 4445.05 27924.03
00:28:49.799 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:49.799 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:49.799 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:49.799 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.799 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:49.799 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.799 10:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:49.799 
10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.799 rmmod nvme_tcp 00:28:49.799 rmmod nvme_fabrics 00:28:49.799 rmmod nvme_keyring 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3096823 ']' 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3096823 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3096823 ']' 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3096823 00:28:49.799 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:49.800 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.800 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3096823 00:28:49.800 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:49.800 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:49.800 10:01:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3096823' 00:28:49.800 killing process with pid 3096823 00:28:49.800 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3096823 00:28:49.800 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3096823 00:28:50.058 [2024-11-20 10:01:13.273938] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:50.058 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:50.058 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:50.058 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:50.059 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:50.059 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:50.059 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:50.059 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:50.059 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.059 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.059 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.059 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.059 10:01:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:52.594 00:28:52.594 real 0m12.584s 00:28:52.594 user 0m19.147s 00:28:52.594 sys 0m6.446s 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:52.594 ************************************ 00:28:52.594 END TEST nvmf_host_management 00:28:52.594 ************************************ 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:52.594 ************************************ 00:28:52.594 START TEST nvmf_lvol 00:28:52.594 ************************************ 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:52.594 * Looking for test storage... 
00:28:52.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1703 -- # lcov --version 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.594 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:28:52.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.595 --rc genhtml_branch_coverage=1 00:28:52.595 --rc genhtml_function_coverage=1 00:28:52.595 --rc genhtml_legend=1 00:28:52.595 --rc geninfo_all_blocks=1 00:28:52.595 --rc geninfo_unexecuted_blocks=1 00:28:52.595 00:28:52.595 ' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:28:52.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.595 --rc genhtml_branch_coverage=1 00:28:52.595 --rc genhtml_function_coverage=1 00:28:52.595 --rc genhtml_legend=1 00:28:52.595 --rc geninfo_all_blocks=1 00:28:52.595 --rc geninfo_unexecuted_blocks=1 00:28:52.595 00:28:52.595 ' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:28:52.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.595 --rc genhtml_branch_coverage=1 00:28:52.595 --rc genhtml_function_coverage=1 00:28:52.595 --rc genhtml_legend=1 00:28:52.595 --rc geninfo_all_blocks=1 00:28:52.595 --rc geninfo_unexecuted_blocks=1 00:28:52.595 00:28:52.595 ' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:28:52.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.595 --rc genhtml_branch_coverage=1 00:28:52.595 --rc genhtml_function_coverage=1 00:28:52.595 --rc genhtml_legend=1 00:28:52.595 --rc geninfo_all_blocks=1 00:28:52.595 --rc geninfo_unexecuted_blocks=1 00:28:52.595 00:28:52.595 ' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.595 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.596 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.596 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.596 
10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.596 10:01:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.169 10:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.169 10:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:59.169 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:59.169 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.169 10:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:59.169 Found net devices under 0000:86:00.0: cvl_0_0 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.169 10:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:59.169 Found net devices under 0000:86:00.1: cvl_0_1 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.169 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:28:59.170 00:28:59.170 --- 10.0.0.2 ping statistics --- 00:28:59.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.170 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:28:59.170 00:28:59.170 --- 10.0.0.1 ping statistics --- 00:28:59.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.170 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3101076 
00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3101076 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3101076 ']' 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 [2024-11-20 10:01:21.659671] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:59.170 [2024-11-20 10:01:21.660561] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:28:59.170 [2024-11-20 10:01:21.660591] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.170 [2024-11-20 10:01:21.723006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.170 [2024-11-20 10:01:21.766030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.170 [2024-11-20 10:01:21.766067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.170 [2024-11-20 10:01:21.766075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.170 [2024-11-20 10:01:21.766081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.170 [2024-11-20 10:01:21.766086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.170 [2024-11-20 10:01:21.770965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.170 [2024-11-20 10:01:21.771000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.170 [2024-11-20 10:01:21.770999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.170 [2024-11-20 10:01:21.838006] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:59.170 [2024-11-20 10:01:21.838036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:59.170 [2024-11-20 10:01:21.838553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:59.170 [2024-11-20 10:01:21.838787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.170 10:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:59.170 [2024-11-20 10:01:22.079752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.170 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:59.170 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:59.170 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:59.430 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:59.430 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:59.690 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:59.690 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=18213044-c094-417d-af35-7d1dc6f1168b 00:28:59.690 10:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 18213044-c094-417d-af35-7d1dc6f1168b lvol 20 00:28:59.949 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7a3c3911-a324-4938-b322-84444c042c4c 00:28:59.949 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:00.208 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7a3c3911-a324-4938-b322-84444c042c4c 00:29:00.468 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:00.468 [2024-11-20 10:01:23.719709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.468 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:00.727 
10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3101557 00:29:00.727 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:00.727 10:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:01.665 10:01:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7a3c3911-a324-4938-b322-84444c042c4c MY_SNAPSHOT 00:29:01.971 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=912ef729-0d49-43f3-99d2-bb998a54f576 00:29:01.971 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7a3c3911-a324-4938-b322-84444c042c4c 30 00:29:02.245 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 912ef729-0d49-43f3-99d2-bb998a54f576 MY_CLONE 00:29:02.538 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2909259b-1508-46a8-a69c-d802430305f5 00:29:02.538 10:01:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2909259b-1508-46a8-a69c-d802430305f5 00:29:03.122 10:01:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3101557 00:29:11.245 Initializing NVMe Controllers 00:29:11.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:11.245 
Controller IO queue size 128, less than required. 00:29:11.245 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:11.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:11.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:11.245 Initialization complete. Launching workers. 00:29:11.245 ======================================================== 00:29:11.245 Latency(us) 00:29:11.245 Device Information : IOPS MiB/s Average min max 00:29:11.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11973.00 46.77 10695.36 1578.28 52387.80 00:29:11.245 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11891.10 46.45 10770.56 3625.81 42000.14 00:29:11.245 ======================================================== 00:29:11.245 Total : 23864.10 93.22 10732.83 1578.28 52387.80 00:29:11.245 00:29:11.245 10:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:11.506 10:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7a3c3911-a324-4938-b322-84444c042c4c 00:29:11.765 10:01:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 18213044-c094-417d-af35-7d1dc6f1168b 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:11.765 rmmod nvme_tcp 00:29:11.765 rmmod nvme_fabrics 00:29:11.765 rmmod nvme_keyring 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3101076 ']' 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3101076 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3101076 ']' 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3101076 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:11.765 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.024 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3101076 00:29:12.024 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:12.024 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:12.024 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3101076' 00:29:12.024 killing process with pid 3101076 00:29:12.024 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3101076 00:29:12.024 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3101076 00:29:12.025 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:12.025 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:12.025 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:12.025 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:12.025 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:12.025 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:12.025 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:12.284 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.284 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:12.284 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.284 10:01:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.284 10:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.190 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.190 00:29:14.190 real 0m21.973s 00:29:14.190 user 0m55.928s 00:29:14.190 sys 0m9.959s 00:29:14.190 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.190 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:14.190 ************************************ 00:29:14.190 END TEST nvmf_lvol 00:29:14.190 ************************************ 00:29:14.190 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:14.190 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:14.190 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.190 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:14.190 ************************************ 00:29:14.190 START TEST nvmf_lvs_grow 00:29:14.190 ************************************ 00:29:14.190 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:14.450 * Looking for test storage... 
00:29:14.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # lcov --version 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.450 10:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.450 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.450 10:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:29:14.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.451 --rc genhtml_branch_coverage=1 00:29:14.451 --rc genhtml_function_coverage=1 00:29:14.451 --rc genhtml_legend=1 00:29:14.451 --rc geninfo_all_blocks=1 00:29:14.451 --rc geninfo_unexecuted_blocks=1 00:29:14.451 00:29:14.451 ' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:29:14.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.451 --rc genhtml_branch_coverage=1 00:29:14.451 --rc genhtml_function_coverage=1 00:29:14.451 --rc genhtml_legend=1 00:29:14.451 --rc geninfo_all_blocks=1 00:29:14.451 --rc geninfo_unexecuted_blocks=1 00:29:14.451 00:29:14.451 ' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:29:14.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.451 --rc genhtml_branch_coverage=1 00:29:14.451 --rc genhtml_function_coverage=1 00:29:14.451 --rc genhtml_legend=1 00:29:14.451 --rc geninfo_all_blocks=1 00:29:14.451 --rc geninfo_unexecuted_blocks=1 00:29:14.451 00:29:14.451 ' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:29:14.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.451 --rc genhtml_branch_coverage=1 00:29:14.451 --rc genhtml_function_coverage=1 00:29:14.451 --rc genhtml_legend=1 00:29:14.451 --rc geninfo_all_blocks=1 00:29:14.451 --rc 
geninfo_unexecuted_blocks=1 00:29:14.451 00:29:14.451 ' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:14.451 10:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.451 10:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.451 10:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.451 10:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.022 
10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.022 10:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.022 10:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:21.022 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:21.022 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.022 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:21.023 Found net devices under 0000:86:00.0: cvl_0_0 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.023 10:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:21.023 Found net devices under 0000:86:00.1: cvl_0_1 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.023 
10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:29:21.023 00:29:21.023 --- 10.0.0.2 ping statistics --- 00:29:21.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.023 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:21.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:29:21.023 00:29:21.023 --- 10.0.0.1 ping statistics --- 00:29:21.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.023 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.023 10:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3106703 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3106703 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3106703 ']' 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.023 [2024-11-20 10:01:43.726506] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:21.023 [2024-11-20 10:01:43.727465] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:29:21.023 [2024-11-20 10:01:43.727499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.023 [2024-11-20 10:01:43.806390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.023 [2024-11-20 10:01:43.848083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.023 [2024-11-20 10:01:43.848119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.023 [2024-11-20 10:01:43.848126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.023 [2024-11-20 10:01:43.848132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.023 [2024-11-20 10:01:43.848138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:21.023 [2024-11-20 10:01:43.848705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.023 [2024-11-20 10:01:43.916021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:21.023 [2024-11-20 10:01:43.916253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.023 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.024 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.024 10:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:21.024 [2024-11-20 10:01:44.145457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.024 ************************************ 00:29:21.024 START TEST lvs_grow_clean 00:29:21.024 ************************************ 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:21.024 10:01:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:21.024 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:21.282 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:21.282 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:21.542 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:21.542 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:21.542 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:21.542 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:21.542 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:21.542 10:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 55f3c960-866b-4bcb-b78c-793d4c389c5f lvol 150 00:29:21.800 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9a13d330-423f-4d34-b290-23d3b71e4a5f 00:29:21.800 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:21.800 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:22.059 [2024-11-20 10:01:45.249101] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:22.059 [2024-11-20 10:01:45.249232] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:22.059 true 00:29:22.059 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:22.059 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:22.318 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:22.318 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:22.577 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9a13d330-423f-4d34-b290-23d3b71e4a5f 00:29:22.577 10:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:22.836 [2024-11-20 10:01:46.037575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.836 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3107204 00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3107204 /var/tmp/bdevperf.sock 00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3107204 ']' 00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.096 10:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:23.096 [2024-11-20 10:01:46.286685] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:29:23.096 [2024-11-20 10:01:46.286734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107204 ] 00:29:23.096 [2024-11-20 10:01:46.363279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.096 [2024-11-20 10:01:46.405484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.033 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.033 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:24.034 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:24.293 Nvme0n1 00:29:24.293 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:24.293 [ 00:29:24.293 { 00:29:24.293 "name": "Nvme0n1", 00:29:24.293 "aliases": [ 00:29:24.293 "9a13d330-423f-4d34-b290-23d3b71e4a5f" 00:29:24.293 ], 00:29:24.293 "product_name": "NVMe disk", 00:29:24.293 
"block_size": 4096, 00:29:24.293 "num_blocks": 38912, 00:29:24.293 "uuid": "9a13d330-423f-4d34-b290-23d3b71e4a5f", 00:29:24.293 "numa_id": 1, 00:29:24.293 "assigned_rate_limits": { 00:29:24.293 "rw_ios_per_sec": 0, 00:29:24.293 "rw_mbytes_per_sec": 0, 00:29:24.293 "r_mbytes_per_sec": 0, 00:29:24.293 "w_mbytes_per_sec": 0 00:29:24.293 }, 00:29:24.293 "claimed": false, 00:29:24.293 "zoned": false, 00:29:24.293 "supported_io_types": { 00:29:24.293 "read": true, 00:29:24.293 "write": true, 00:29:24.293 "unmap": true, 00:29:24.293 "flush": true, 00:29:24.293 "reset": true, 00:29:24.293 "nvme_admin": true, 00:29:24.293 "nvme_io": true, 00:29:24.293 "nvme_io_md": false, 00:29:24.293 "write_zeroes": true, 00:29:24.293 "zcopy": false, 00:29:24.293 "get_zone_info": false, 00:29:24.293 "zone_management": false, 00:29:24.293 "zone_append": false, 00:29:24.293 "compare": true, 00:29:24.293 "compare_and_write": true, 00:29:24.293 "abort": true, 00:29:24.294 "seek_hole": false, 00:29:24.294 "seek_data": false, 00:29:24.294 "copy": true, 00:29:24.294 "nvme_iov_md": false 00:29:24.294 }, 00:29:24.294 "memory_domains": [ 00:29:24.294 { 00:29:24.294 "dma_device_id": "system", 00:29:24.294 "dma_device_type": 1 00:29:24.294 } 00:29:24.294 ], 00:29:24.294 "driver_specific": { 00:29:24.294 "nvme": [ 00:29:24.294 { 00:29:24.294 "trid": { 00:29:24.294 "trtype": "TCP", 00:29:24.294 "adrfam": "IPv4", 00:29:24.294 "traddr": "10.0.0.2", 00:29:24.294 "trsvcid": "4420", 00:29:24.294 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:24.294 }, 00:29:24.294 "ctrlr_data": { 00:29:24.294 "cntlid": 1, 00:29:24.294 "vendor_id": "0x8086", 00:29:24.294 "model_number": "SPDK bdev Controller", 00:29:24.294 "serial_number": "SPDK0", 00:29:24.294 "firmware_revision": "25.01", 00:29:24.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.294 "oacs": { 00:29:24.294 "security": 0, 00:29:24.294 "format": 0, 00:29:24.294 "firmware": 0, 00:29:24.294 "ns_manage": 0 00:29:24.294 }, 00:29:24.294 "multi_ctrlr": true, 
00:29:24.294 "ana_reporting": false 00:29:24.294 }, 00:29:24.294 "vs": { 00:29:24.294 "nvme_version": "1.3" 00:29:24.294 }, 00:29:24.294 "ns_data": { 00:29:24.294 "id": 1, 00:29:24.294 "can_share": true 00:29:24.294 } 00:29:24.294 } 00:29:24.294 ], 00:29:24.294 "mp_policy": "active_passive" 00:29:24.294 } 00:29:24.294 } 00:29:24.294 ] 00:29:24.294 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3107436 00:29:24.294 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:24.294 10:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:24.552 Running I/O for 10 seconds... 00:29:25.489 Latency(us) 00:29:25.489 [2024-11-20T09:01:48.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:25.489 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:25.489 [2024-11-20T09:01:48.821Z] =================================================================================================================== 00:29:25.489 [2024-11-20T09:01:48.821Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:25.489 00:29:26.426 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:26.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:26.426 Nvme0n1 : 2.00 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:26.426 [2024-11-20T09:01:49.758Z] 
=================================================================================================================== 00:29:26.426 [2024-11-20T09:01:49.758Z] Total : 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:26.426 00:29:26.685 true 00:29:26.685 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:26.685 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:26.685 10:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:26.685 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:26.685 10:01:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3107436 00:29:27.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.624 Nvme0n1 : 3.00 22373.33 87.40 0.00 0.00 0.00 0.00 0.00 00:29:27.624 [2024-11-20T09:01:50.956Z] =================================================================================================================== 00:29:27.624 [2024-11-20T09:01:50.956Z] Total : 22373.33 87.40 0.00 0.00 0.00 0.00 0.00 00:29:27.624 00:29:28.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.564 Nvme0n1 : 4.00 22491.25 87.86 0.00 0.00 0.00 0.00 0.00 00:29:28.564 [2024-11-20T09:01:51.896Z] =================================================================================================================== 00:29:28.564 [2024-11-20T09:01:51.896Z] Total : 22491.25 87.86 0.00 0.00 0.00 0.00 0.00 00:29:28.564 00:29:29.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:29.502 Nvme0n1 : 5.00 22590.40 88.24 0.00 0.00 0.00 0.00 0.00 00:29:29.502 [2024-11-20T09:01:52.834Z] =================================================================================================================== 00:29:29.502 [2024-11-20T09:01:52.834Z] Total : 22590.40 88.24 0.00 0.00 0.00 0.00 0.00 00:29:29.502 00:29:30.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:30.440 Nvme0n1 : 6.00 22646.00 88.46 0.00 0.00 0.00 0.00 0.00 00:29:30.440 [2024-11-20T09:01:53.772Z] =================================================================================================================== 00:29:30.440 [2024-11-20T09:01:53.772Z] Total : 22646.00 88.46 0.00 0.00 0.00 0.00 0.00 00:29:30.440 00:29:31.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:31.819 Nvme0n1 : 7.00 22694.71 88.65 0.00 0.00 0.00 0.00 0.00 00:29:31.819 [2024-11-20T09:01:55.151Z] =================================================================================================================== 00:29:31.819 [2024-11-20T09:01:55.151Z] Total : 22694.71 88.65 0.00 0.00 0.00 0.00 0.00 00:29:31.819 00:29:32.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.386 Nvme0n1 : 8.00 22731.25 88.79 0.00 0.00 0.00 0.00 0.00 00:29:32.386 [2024-11-20T09:01:55.718Z] =================================================================================================================== 00:29:32.386 [2024-11-20T09:01:55.718Z] Total : 22731.25 88.79 0.00 0.00 0.00 0.00 0.00 00:29:32.386 00:29:33.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.764 Nvme0n1 : 9.00 22759.67 88.90 0.00 0.00 0.00 0.00 0.00 00:29:33.764 [2024-11-20T09:01:57.096Z] =================================================================================================================== 00:29:33.764 [2024-11-20T09:01:57.096Z] Total : 22759.67 88.90 0.00 0.00 0.00 0.00 0.00 00:29:33.764 
00:29:34.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.702 Nvme0n1 : 10.00 22769.70 88.94 0.00 0.00 0.00 0.00 0.00 00:29:34.702 [2024-11-20T09:01:58.034Z] =================================================================================================================== 00:29:34.702 [2024-11-20T09:01:58.034Z] Total : 22769.70 88.94 0.00 0.00 0.00 0.00 0.00 00:29:34.702 00:29:34.702 00:29:34.702 Latency(us) 00:29:34.702 [2024-11-20T09:01:58.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.702 Nvme0n1 : 10.01 22768.06 88.94 0.00 0.00 5618.81 3191.32 27126.21 00:29:34.702 [2024-11-20T09:01:58.034Z] =================================================================================================================== 00:29:34.702 [2024-11-20T09:01:58.034Z] Total : 22768.06 88.94 0.00 0.00 5618.81 3191.32 27126.21 00:29:34.702 { 00:29:34.702 "results": [ 00:29:34.702 { 00:29:34.702 "job": "Nvme0n1", 00:29:34.702 "core_mask": "0x2", 00:29:34.702 "workload": "randwrite", 00:29:34.702 "status": "finished", 00:29:34.702 "queue_depth": 128, 00:29:34.702 "io_size": 4096, 00:29:34.702 "runtime": 10.006341, 00:29:34.702 "iops": 22768.062771396657, 00:29:34.702 "mibps": 88.93774520076819, 00:29:34.702 "io_failed": 0, 00:29:34.702 "io_timeout": 0, 00:29:34.702 "avg_latency_us": 5618.811110205678, 00:29:34.702 "min_latency_us": 3191.318260869565, 00:29:34.702 "max_latency_us": 27126.205217391303 00:29:34.702 } 00:29:34.702 ], 00:29:34.702 "core_count": 1 00:29:34.702 } 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3107204 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3107204 ']' 00:29:34.702 10:01:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3107204 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3107204 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3107204' 00:29:34.702 killing process with pid 3107204 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3107204 00:29:34.702 Received shutdown signal, test time was about 10.000000 seconds 00:29:34.702 00:29:34.702 Latency(us) 00:29:34.702 [2024-11-20T09:01:58.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.702 [2024-11-20T09:01:58.034Z] =================================================================================================================== 00:29:34.702 [2024-11-20T09:01:58.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3107204 00:29:34.702 10:01:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:34.961 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:35.221 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:35.221 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:35.480 [2024-11-20 10:01:58.753165] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:35.480 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:35.739 request: 00:29:35.739 { 00:29:35.739 "uuid": "55f3c960-866b-4bcb-b78c-793d4c389c5f", 00:29:35.739 "method": 
"bdev_lvol_get_lvstores", 00:29:35.739 "req_id": 1 00:29:35.739 } 00:29:35.739 Got JSON-RPC error response 00:29:35.739 response: 00:29:35.739 { 00:29:35.739 "code": -19, 00:29:35.739 "message": "No such device" 00:29:35.739 } 00:29:35.739 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:35.740 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.740 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.740 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.740 10:01:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:35.998 aio_bdev 00:29:35.998 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9a13d330-423f-4d34-b290-23d3b71e4a5f 00:29:35.998 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=9a13d330-423f-4d34-b290-23d3b71e4a5f 00:29:35.998 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:35.998 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:35.998 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:35.998 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:35.998 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:36.258 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9a13d330-423f-4d34-b290-23d3b71e4a5f -t 2000 00:29:36.258 [ 00:29:36.258 { 00:29:36.258 "name": "9a13d330-423f-4d34-b290-23d3b71e4a5f", 00:29:36.258 "aliases": [ 00:29:36.258 "lvs/lvol" 00:29:36.258 ], 00:29:36.258 "product_name": "Logical Volume", 00:29:36.258 "block_size": 4096, 00:29:36.258 "num_blocks": 38912, 00:29:36.258 "uuid": "9a13d330-423f-4d34-b290-23d3b71e4a5f", 00:29:36.258 "assigned_rate_limits": { 00:29:36.258 "rw_ios_per_sec": 0, 00:29:36.258 "rw_mbytes_per_sec": 0, 00:29:36.258 "r_mbytes_per_sec": 0, 00:29:36.258 "w_mbytes_per_sec": 0 00:29:36.258 }, 00:29:36.258 "claimed": false, 00:29:36.258 "zoned": false, 00:29:36.258 "supported_io_types": { 00:29:36.258 "read": true, 00:29:36.258 "write": true, 00:29:36.258 "unmap": true, 00:29:36.258 "flush": false, 00:29:36.258 "reset": true, 00:29:36.258 "nvme_admin": false, 00:29:36.258 "nvme_io": false, 00:29:36.258 "nvme_io_md": false, 00:29:36.258 "write_zeroes": true, 00:29:36.258 "zcopy": false, 00:29:36.258 "get_zone_info": false, 00:29:36.258 "zone_management": false, 00:29:36.258 "zone_append": false, 00:29:36.258 "compare": false, 00:29:36.258 "compare_and_write": false, 00:29:36.258 "abort": false, 00:29:36.258 "seek_hole": true, 00:29:36.258 "seek_data": true, 00:29:36.258 "copy": false, 00:29:36.258 "nvme_iov_md": false 00:29:36.258 }, 00:29:36.258 "driver_specific": { 00:29:36.258 "lvol": { 00:29:36.258 "lvol_store_uuid": "55f3c960-866b-4bcb-b78c-793d4c389c5f", 00:29:36.258 "base_bdev": "aio_bdev", 00:29:36.258 
"thin_provision": false, 00:29:36.258 "num_allocated_clusters": 38, 00:29:36.258 "snapshot": false, 00:29:36.258 "clone": false, 00:29:36.258 "esnap_clone": false 00:29:36.258 } 00:29:36.258 } 00:29:36.258 } 00:29:36.258 ] 00:29:36.517 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:36.517 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:36.517 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:36.517 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:36.517 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 00:29:36.517 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:36.776 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:36.776 10:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9a13d330-423f-4d34-b290-23d3b71e4a5f 00:29:37.035 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55f3c960-866b-4bcb-b78c-793d4c389c5f 
00:29:37.294 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:37.294 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:37.553 00:29:37.553 real 0m16.420s 00:29:37.553 user 0m16.051s 00:29:37.553 sys 0m1.553s 00:29:37.553 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.553 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:37.554 ************************************ 00:29:37.554 END TEST lvs_grow_clean 00:29:37.554 ************************************ 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:37.554 ************************************ 00:29:37.554 START TEST lvs_grow_dirty 00:29:37.554 ************************************ 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:37.554 10:02:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:37.554 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:37.813 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:37.813 10:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:37.813 10:02:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:37.813 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:37.813 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:38.072 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:38.072 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:38.072 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b6f77959-3367-496b-a673-c7ffe89fa1b3 lvol 150 00:29:38.329 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3acab780-8ffa-4119-b137-6bd2d6d765f9 00:29:38.329 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:38.329 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:38.587 [2024-11-20 10:02:01.701102] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:38.587 [2024-11-20 
10:02:01.701232] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:38.587 true 00:29:38.587 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:38.587 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:38.852 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:38.852 10:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:38.852 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3acab780-8ffa-4119-b137-6bd2d6d765f9 00:29:39.111 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:39.369 [2024-11-20 10:02:02.501537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.369 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3109916 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3109916 /var/tmp/bdevperf.sock 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3109916 ']' 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:39.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:39.630 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:39.630 [2024-11-20 10:02:02.766971] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:29:39.630 [2024-11-20 10:02:02.767022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3109916 ] 00:29:39.630 [2024-11-20 10:02:02.842698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.630 [2024-11-20 10:02:02.883370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.889 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.889 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:39.889 10:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:39.889 Nvme0n1 00:29:40.147 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:40.148 [ 00:29:40.148 { 00:29:40.148 "name": "Nvme0n1", 00:29:40.148 "aliases": [ 00:29:40.148 "3acab780-8ffa-4119-b137-6bd2d6d765f9" 00:29:40.148 ], 00:29:40.148 "product_name": "NVMe disk", 00:29:40.148 "block_size": 4096, 00:29:40.148 "num_blocks": 38912, 00:29:40.148 "uuid": "3acab780-8ffa-4119-b137-6bd2d6d765f9", 00:29:40.148 "numa_id": 1, 00:29:40.148 "assigned_rate_limits": { 00:29:40.148 "rw_ios_per_sec": 0, 00:29:40.148 "rw_mbytes_per_sec": 0, 00:29:40.148 "r_mbytes_per_sec": 0, 00:29:40.148 "w_mbytes_per_sec": 0 00:29:40.148 }, 00:29:40.148 "claimed": false, 00:29:40.148 "zoned": false, 
00:29:40.148 "supported_io_types": { 00:29:40.148 "read": true, 00:29:40.148 "write": true, 00:29:40.148 "unmap": true, 00:29:40.148 "flush": true, 00:29:40.148 "reset": true, 00:29:40.148 "nvme_admin": true, 00:29:40.148 "nvme_io": true, 00:29:40.148 "nvme_io_md": false, 00:29:40.148 "write_zeroes": true, 00:29:40.148 "zcopy": false, 00:29:40.148 "get_zone_info": false, 00:29:40.148 "zone_management": false, 00:29:40.148 "zone_append": false, 00:29:40.148 "compare": true, 00:29:40.148 "compare_and_write": true, 00:29:40.148 "abort": true, 00:29:40.148 "seek_hole": false, 00:29:40.148 "seek_data": false, 00:29:40.148 "copy": true, 00:29:40.148 "nvme_iov_md": false 00:29:40.148 }, 00:29:40.148 "memory_domains": [ 00:29:40.148 { 00:29:40.148 "dma_device_id": "system", 00:29:40.148 "dma_device_type": 1 00:29:40.148 } 00:29:40.148 ], 00:29:40.148 "driver_specific": { 00:29:40.148 "nvme": [ 00:29:40.148 { 00:29:40.148 "trid": { 00:29:40.148 "trtype": "TCP", 00:29:40.148 "adrfam": "IPv4", 00:29:40.148 "traddr": "10.0.0.2", 00:29:40.148 "trsvcid": "4420", 00:29:40.148 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:40.148 }, 00:29:40.148 "ctrlr_data": { 00:29:40.148 "cntlid": 1, 00:29:40.148 "vendor_id": "0x8086", 00:29:40.148 "model_number": "SPDK bdev Controller", 00:29:40.148 "serial_number": "SPDK0", 00:29:40.148 "firmware_revision": "25.01", 00:29:40.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:40.148 "oacs": { 00:29:40.148 "security": 0, 00:29:40.148 "format": 0, 00:29:40.148 "firmware": 0, 00:29:40.148 "ns_manage": 0 00:29:40.148 }, 00:29:40.148 "multi_ctrlr": true, 00:29:40.148 "ana_reporting": false 00:29:40.148 }, 00:29:40.148 "vs": { 00:29:40.148 "nvme_version": "1.3" 00:29:40.148 }, 00:29:40.148 "ns_data": { 00:29:40.148 "id": 1, 00:29:40.148 "can_share": true 00:29:40.148 } 00:29:40.148 } 00:29:40.148 ], 00:29:40.148 "mp_policy": "active_passive" 00:29:40.148 } 00:29:40.148 } 00:29:40.148 ] 00:29:40.148 10:02:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3110027 00:29:40.148 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:40.148 10:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:40.407 Running I/O for 10 seconds... 00:29:41.343 Latency(us) 00:29:41.343 [2024-11-20T09:02:04.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.343 Nvme0n1 : 1.00 21197.00 82.80 0.00 0.00 0.00 0.00 0.00 00:29:41.343 [2024-11-20T09:02:04.675Z] =================================================================================================================== 00:29:41.344 [2024-11-20T09:02:04.676Z] Total : 21197.00 82.80 0.00 0.00 0.00 0.00 0.00 00:29:41.344 00:29:42.282 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:42.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.282 Nvme0n1 : 2.00 21606.50 84.40 0.00 0.00 0.00 0.00 0.00 00:29:42.282 [2024-11-20T09:02:05.614Z] =================================================================================================================== 00:29:42.282 [2024-11-20T09:02:05.614Z] Total : 21606.50 84.40 0.00 0.00 0.00 0.00 0.00 00:29:42.282 00:29:42.282 true 00:29:42.282 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:42.282 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:42.541 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:42.541 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:42.541 10:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3110027 00:29:43.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.478 Nvme0n1 : 3.00 21711.00 84.81 0.00 0.00 0.00 0.00 0.00 00:29:43.478 [2024-11-20T09:02:06.810Z] =================================================================================================================== 00:29:43.478 [2024-11-20T09:02:06.810Z] Total : 21711.00 84.81 0.00 0.00 0.00 0.00 0.00 00:29:43.478 00:29:44.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.416 Nvme0n1 : 4.00 21819.25 85.23 0.00 0.00 0.00 0.00 0.00 00:29:44.416 [2024-11-20T09:02:07.748Z] =================================================================================================================== 00:29:44.416 [2024-11-20T09:02:07.748Z] Total : 21819.25 85.23 0.00 0.00 0.00 0.00 0.00 00:29:44.416 00:29:45.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.352 Nvme0n1 : 5.00 21893.80 85.52 0.00 0.00 0.00 0.00 0.00 00:29:45.352 [2024-11-20T09:02:08.684Z] =================================================================================================================== 00:29:45.352 [2024-11-20T09:02:08.684Z] Total : 21893.80 85.52 0.00 0.00 0.00 0.00 0.00 00:29:45.352 00:29:46.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:46.291 Nvme0n1 : 6.00 21948.83 85.74 0.00 0.00 0.00 0.00 0.00 00:29:46.291 [2024-11-20T09:02:09.623Z] =================================================================================================================== 00:29:46.291 [2024-11-20T09:02:09.623Z] Total : 21948.83 85.74 0.00 0.00 0.00 0.00 0.00 00:29:46.291 00:29:47.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:47.229 Nvme0n1 : 7.00 21953.86 85.76 0.00 0.00 0.00 0.00 0.00 00:29:47.229 [2024-11-20T09:02:10.561Z] =================================================================================================================== 00:29:47.229 [2024-11-20T09:02:10.561Z] Total : 21953.86 85.76 0.00 0.00 0.00 0.00 0.00 00:29:47.229 00:29:48.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.611 Nvme0n1 : 8.00 21989.62 85.90 0.00 0.00 0.00 0.00 0.00 00:29:48.611 [2024-11-20T09:02:11.943Z] =================================================================================================================== 00:29:48.611 [2024-11-20T09:02:11.943Z] Total : 21989.62 85.90 0.00 0.00 0.00 0.00 0.00 00:29:48.611 00:29:49.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:49.547 Nvme0n1 : 9.00 22024.56 86.03 0.00 0.00 0.00 0.00 0.00 00:29:49.547 [2024-11-20T09:02:12.879Z] =================================================================================================================== 00:29:49.547 [2024-11-20T09:02:12.879Z] Total : 22024.56 86.03 0.00 0.00 0.00 0.00 0.00 00:29:49.547 00:29:50.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.483 Nvme0n1 : 10.00 22050.90 86.14 0.00 0.00 0.00 0.00 0.00 00:29:50.483 [2024-11-20T09:02:13.815Z] =================================================================================================================== 00:29:50.483 [2024-11-20T09:02:13.816Z] Total : 22050.90 86.14 0.00 0.00 0.00 0.00 0.00 00:29:50.484 00:29:50.484 
00:29:50.484 Latency(us) 00:29:50.484 [2024-11-20T09:02:13.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.484 Nvme0n1 : 10.01 22049.69 86.13 0.00 0.00 5800.95 3761.20 21199.47 00:29:50.484 [2024-11-20T09:02:13.816Z] =================================================================================================================== 00:29:50.484 [2024-11-20T09:02:13.816Z] Total : 22049.69 86.13 0.00 0.00 5800.95 3761.20 21199.47 00:29:50.484 { 00:29:50.484 "results": [ 00:29:50.484 { 00:29:50.484 "job": "Nvme0n1", 00:29:50.484 "core_mask": "0x2", 00:29:50.484 "workload": "randwrite", 00:29:50.484 "status": "finished", 00:29:50.484 "queue_depth": 128, 00:29:50.484 "io_size": 4096, 00:29:50.484 "runtime": 10.005627, 00:29:50.484 "iops": 22049.692637952623, 00:29:50.484 "mibps": 86.13161186700243, 00:29:50.484 "io_failed": 0, 00:29:50.484 "io_timeout": 0, 00:29:50.484 "avg_latency_us": 5800.950750409466, 00:29:50.484 "min_latency_us": 3761.1965217391303, 00:29:50.484 "max_latency_us": 21199.471304347826 00:29:50.484 } 00:29:50.484 ], 00:29:50.484 "core_count": 1 00:29:50.484 } 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3109916 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3109916 ']' 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3109916 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.484 10:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3109916 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3109916' 00:29:50.484 killing process with pid 3109916 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3109916 00:29:50.484 Received shutdown signal, test time was about 10.000000 seconds 00:29:50.484 00:29:50.484 Latency(us) 00:29:50.484 [2024-11-20T09:02:13.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.484 [2024-11-20T09:02:13.816Z] =================================================================================================================== 00:29:50.484 [2024-11-20T09:02:13.816Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3109916 00:29:50.484 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:50.743 10:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:51.002 10:02:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:51.002 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3106703 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3106703 00:29:51.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3106703 Killed "${NVMF_APP[@]}" "$@" 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3111874 00:29:51.261 10:02:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3111874 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3111874 ']' 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.261 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:51.261 [2024-11-20 10:02:14.479605] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:51.261 [2024-11-20 10:02:14.480547] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:29:51.261 [2024-11-20 10:02:14.480587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.261 [2024-11-20 10:02:14.558324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.520 [2024-11-20 10:02:14.597863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.520 [2024-11-20 10:02:14.597901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.520 [2024-11-20 10:02:14.597908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.520 [2024-11-20 10:02:14.597914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.520 [2024-11-20 10:02:14.597919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.520 [2024-11-20 10:02:14.598473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.520 [2024-11-20 10:02:14.664645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:51.520 [2024-11-20 10:02:14.664855] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:51.520 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.520 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:51.520 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.520 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.520 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:51.520 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.520 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:51.780 [2024-11-20 10:02:14.919892] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:51.780 [2024-11-20 10:02:14.920108] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:51.780 [2024-11-20 10:02:14.920194] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:51.780 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:51.780 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3acab780-8ffa-4119-b137-6bd2d6d765f9 00:29:51.780 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=3acab780-8ffa-4119-b137-6bd2d6d765f9 00:29:51.780 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:51.780 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:51.780 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:51.780 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:51.780 10:02:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:52.072 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3acab780-8ffa-4119-b137-6bd2d6d765f9 -t 2000 00:29:52.072 [ 00:29:52.072 { 00:29:52.072 "name": "3acab780-8ffa-4119-b137-6bd2d6d765f9", 00:29:52.072 "aliases": [ 00:29:52.072 "lvs/lvol" 00:29:52.072 ], 00:29:52.072 "product_name": "Logical Volume", 00:29:52.072 "block_size": 4096, 00:29:52.072 "num_blocks": 38912, 00:29:52.072 "uuid": "3acab780-8ffa-4119-b137-6bd2d6d765f9", 00:29:52.072 "assigned_rate_limits": { 00:29:52.072 "rw_ios_per_sec": 0, 00:29:52.072 "rw_mbytes_per_sec": 0, 00:29:52.072 "r_mbytes_per_sec": 0, 00:29:52.072 "w_mbytes_per_sec": 0 00:29:52.072 }, 00:29:52.072 "claimed": false, 00:29:52.072 "zoned": false, 00:29:52.072 "supported_io_types": { 00:29:52.072 "read": true, 00:29:52.072 "write": true, 00:29:52.072 "unmap": true, 00:29:52.072 "flush": false, 00:29:52.072 "reset": true, 00:29:52.072 "nvme_admin": false, 00:29:52.072 "nvme_io": false, 00:29:52.072 "nvme_io_md": false, 00:29:52.072 "write_zeroes": true, 
00:29:52.072 "zcopy": false, 00:29:52.072 "get_zone_info": false, 00:29:52.072 "zone_management": false, 00:29:52.072 "zone_append": false, 00:29:52.072 "compare": false, 00:29:52.072 "compare_and_write": false, 00:29:52.072 "abort": false, 00:29:52.072 "seek_hole": true, 00:29:52.072 "seek_data": true, 00:29:52.072 "copy": false, 00:29:52.072 "nvme_iov_md": false 00:29:52.072 }, 00:29:52.072 "driver_specific": { 00:29:52.072 "lvol": { 00:29:52.072 "lvol_store_uuid": "b6f77959-3367-496b-a673-c7ffe89fa1b3", 00:29:52.072 "base_bdev": "aio_bdev", 00:29:52.072 "thin_provision": false, 00:29:52.072 "num_allocated_clusters": 38, 00:29:52.072 "snapshot": false, 00:29:52.072 "clone": false, 00:29:52.072 "esnap_clone": false 00:29:52.072 } 00:29:52.072 } 00:29:52.072 } 00:29:52.072 ] 00:29:52.072 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:52.072 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:52.072 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:52.419 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:52.419 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:52.419 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:52.768 [2024-11-20 10:02:15.935060] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:52.768 10:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:53.028 request: 00:29:53.028 { 00:29:53.028 "uuid": "b6f77959-3367-496b-a673-c7ffe89fa1b3", 00:29:53.028 "method": "bdev_lvol_get_lvstores", 00:29:53.028 "req_id": 1 00:29:53.028 } 00:29:53.028 Got JSON-RPC error response 00:29:53.028 response: 00:29:53.028 { 00:29:53.028 "code": -19, 00:29:53.028 "message": "No such device" 00:29:53.028 } 00:29:53.028 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:53.028 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:53.028 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:53.028 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:53.028 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:53.028 aio_bdev 00:29:53.287 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3acab780-8ffa-4119-b137-6bd2d6d765f9 00:29:53.287 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3acab780-8ffa-4119-b137-6bd2d6d765f9 00:29:53.287 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:53.287 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:53.287 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:53.287 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:53.287 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:53.287 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3acab780-8ffa-4119-b137-6bd2d6d765f9 -t 2000 00:29:53.546 [ 00:29:53.546 { 00:29:53.546 "name": "3acab780-8ffa-4119-b137-6bd2d6d765f9", 00:29:53.546 "aliases": [ 00:29:53.546 "lvs/lvol" 00:29:53.546 ], 00:29:53.546 "product_name": "Logical Volume", 00:29:53.546 "block_size": 4096, 00:29:53.546 "num_blocks": 38912, 00:29:53.546 "uuid": "3acab780-8ffa-4119-b137-6bd2d6d765f9", 00:29:53.546 "assigned_rate_limits": { 00:29:53.546 "rw_ios_per_sec": 0, 00:29:53.546 "rw_mbytes_per_sec": 0, 00:29:53.546 
"r_mbytes_per_sec": 0, 00:29:53.546 "w_mbytes_per_sec": 0 00:29:53.546 }, 00:29:53.546 "claimed": false, 00:29:53.546 "zoned": false, 00:29:53.546 "supported_io_types": { 00:29:53.546 "read": true, 00:29:53.546 "write": true, 00:29:53.546 "unmap": true, 00:29:53.546 "flush": false, 00:29:53.546 "reset": true, 00:29:53.546 "nvme_admin": false, 00:29:53.546 "nvme_io": false, 00:29:53.546 "nvme_io_md": false, 00:29:53.546 "write_zeroes": true, 00:29:53.546 "zcopy": false, 00:29:53.546 "get_zone_info": false, 00:29:53.546 "zone_management": false, 00:29:53.546 "zone_append": false, 00:29:53.546 "compare": false, 00:29:53.546 "compare_and_write": false, 00:29:53.546 "abort": false, 00:29:53.546 "seek_hole": true, 00:29:53.546 "seek_data": true, 00:29:53.546 "copy": false, 00:29:53.546 "nvme_iov_md": false 00:29:53.546 }, 00:29:53.546 "driver_specific": { 00:29:53.546 "lvol": { 00:29:53.546 "lvol_store_uuid": "b6f77959-3367-496b-a673-c7ffe89fa1b3", 00:29:53.546 "base_bdev": "aio_bdev", 00:29:53.546 "thin_provision": false, 00:29:53.546 "num_allocated_clusters": 38, 00:29:53.546 "snapshot": false, 00:29:53.546 "clone": false, 00:29:53.546 "esnap_clone": false 00:29:53.546 } 00:29:53.546 } 00:29:53.546 } 00:29:53.546 ] 00:29:53.546 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:53.546 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:53.546 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:53.805 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:53.805 10:02:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:53.805 10:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:54.064 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:54.064 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3acab780-8ffa-4119-b137-6bd2d6d765f9 00:29:54.064 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6f77959-3367-496b-a673-c7ffe89fa1b3 00:29:54.323 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:54.582 00:29:54.582 real 0m17.110s 00:29:54.582 user 0m34.358s 00:29:54.582 sys 0m4.000s 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:54.582 ************************************ 00:29:54.582 END TEST lvs_grow_dirty 00:29:54.582 ************************************ 
00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:54.582 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:54.583 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:54.583 nvmf_trace.0 00:29:54.583 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:54.583 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:54.583 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:54.583 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:54.583 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:54.583 10:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:54.583 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:54.583 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:54.583 rmmod nvme_tcp 00:29:54.583 rmmod nvme_fabrics 00:29:54.842 rmmod nvme_keyring 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3111874 ']' 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3111874 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3111874 ']' 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3111874 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111874 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.842 
10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111874' 00:29:54.842 killing process with pid 3111874 00:29:54.842 10:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3111874 00:29:54.842 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3111874 00:29:54.842 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:54.842 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:54.842 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:54.842 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:54.842 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:54.842 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:54.842 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.101 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.101 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.101 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.101 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.101 10:02:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.007 
10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.007 00:29:57.007 real 0m42.741s 00:29:57.007 user 0m52.910s 00:29:57.007 sys 0m10.507s 00:29:57.007 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.007 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:57.007 ************************************ 00:29:57.007 END TEST nvmf_lvs_grow 00:29:57.007 ************************************ 00:29:57.007 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:57.007 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:57.007 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.007 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:57.007 ************************************ 00:29:57.007 START TEST nvmf_bdev_io_wait 00:29:57.007 ************************************ 00:29:57.007 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:57.267 * Looking for test storage... 
00:29:57.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # lcov --version 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:29:57.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.267 --rc genhtml_branch_coverage=1 00:29:57.267 --rc genhtml_function_coverage=1 00:29:57.267 --rc genhtml_legend=1 00:29:57.267 --rc geninfo_all_blocks=1 00:29:57.267 --rc geninfo_unexecuted_blocks=1 00:29:57.267 00:29:57.267 ' 00:29:57.267 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:29:57.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.268 --rc genhtml_branch_coverage=1 00:29:57.268 --rc genhtml_function_coverage=1 00:29:57.268 --rc genhtml_legend=1 00:29:57.268 --rc geninfo_all_blocks=1 00:29:57.268 --rc geninfo_unexecuted_blocks=1 00:29:57.268 00:29:57.268 ' 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:29:57.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.268 --rc genhtml_branch_coverage=1 00:29:57.268 --rc genhtml_function_coverage=1 00:29:57.268 --rc genhtml_legend=1 00:29:57.268 --rc geninfo_all_blocks=1 00:29:57.268 --rc geninfo_unexecuted_blocks=1 00:29:57.268 00:29:57.268 ' 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:29:57.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.268 --rc genhtml_branch_coverage=1 00:29:57.268 --rc genhtml_function_coverage=1 
00:29:57.268 --rc genhtml_legend=1 00:29:57.268 --rc geninfo_all_blocks=1 00:29:57.268 --rc geninfo_unexecuted_blocks=1 00:29:57.268 00:29:57.268 ' 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.268 10:02:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.268 10:02:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.268 10:02:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.268 10:02:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.268 10:02:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:03.879 10:02:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.879 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:03.880 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:03.880 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:03.880 Found net devices under 0000:86:00.0: cvl_0_0 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:03.880 Found net devices under 0000:86:00.1: cvl_0_1 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.880 10:02:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:03.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:03.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:30:03.880 00:30:03.880 --- 10.0.0.2 ping statistics --- 00:30:03.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.880 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:30:03.880 00:30:03.880 --- 10.0.0.1 ping statistics --- 00:30:03.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.880 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.880 10:02:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:03.880 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3115933 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3115933 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3115933 ']' 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 [2024-11-20 10:02:26.453397] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:03.881 [2024-11-20 10:02:26.454358] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:30:03.881 [2024-11-20 10:02:26.454395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.881 [2024-11-20 10:02:26.534170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:03.881 [2024-11-20 10:02:26.578287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.881 [2024-11-20 10:02:26.578324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.881 [2024-11-20 10:02:26.578333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.881 [2024-11-20 10:02:26.578339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.881 [2024-11-20 10:02:26.578344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:03.881 [2024-11-20 10:02:26.579909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.881 [2024-11-20 10:02:26.580021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.881 [2024-11-20 10:02:26.580128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:03.881 [2024-11-20 10:02:26.580129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:03.881 [2024-11-20 10:02:26.580390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.881 10:02:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 [2024-11-20 10:02:26.706048] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:03.881 [2024-11-20 10:02:26.706680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:03.881 [2024-11-20 10:02:26.706929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:03.881 [2024-11-20 10:02:26.707071] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 [2024-11-20 10:02:26.716763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 Malloc0 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.881 10:02:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:03.881 [2024-11-20 10:02:26.789093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3115955 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3115957 00:30:03.881 10:02:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.881 { 00:30:03.881 "params": { 00:30:03.881 "name": "Nvme$subsystem", 00:30:03.881 "trtype": "$TEST_TRANSPORT", 00:30:03.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.881 "adrfam": "ipv4", 00:30:03.881 "trsvcid": "$NVMF_PORT", 00:30:03.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.881 "hdgst": ${hdgst:-false}, 00:30:03.881 "ddgst": ${ddgst:-false} 00:30:03.881 }, 00:30:03.881 "method": "bdev_nvme_attach_controller" 00:30:03.881 } 00:30:03.881 EOF 00:30:03.881 )") 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3115959 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.881 10:02:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:03.881 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.881 { 00:30:03.881 "params": { 00:30:03.881 "name": "Nvme$subsystem", 00:30:03.881 "trtype": "$TEST_TRANSPORT", 00:30:03.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.881 "adrfam": "ipv4", 00:30:03.881 "trsvcid": "$NVMF_PORT", 00:30:03.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.881 "hdgst": ${hdgst:-false}, 00:30:03.881 "ddgst": ${ddgst:-false} 00:30:03.881 }, 00:30:03.881 "method": "bdev_nvme_attach_controller" 00:30:03.881 } 00:30:03.882 EOF 00:30:03.882 )") 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3115962 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.882 { 00:30:03.882 "params": { 00:30:03.882 "name": "Nvme$subsystem", 00:30:03.882 "trtype": "$TEST_TRANSPORT", 00:30:03.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.882 "adrfam": "ipv4", 00:30:03.882 "trsvcid": "$NVMF_PORT", 00:30:03.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.882 "hdgst": ${hdgst:-false}, 00:30:03.882 "ddgst": ${ddgst:-false} 00:30:03.882 }, 00:30:03.882 "method": "bdev_nvme_attach_controller" 00:30:03.882 } 00:30:03.882 EOF 00:30:03.882 )") 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:03.882 { 00:30:03.882 "params": { 00:30:03.882 "name": "Nvme$subsystem", 00:30:03.882 "trtype": "$TEST_TRANSPORT", 00:30:03.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:03.882 "adrfam": "ipv4", 00:30:03.882 "trsvcid": "$NVMF_PORT", 00:30:03.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:03.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:03.882 "hdgst": ${hdgst:-false}, 00:30:03.882 "ddgst": ${ddgst:-false} 00:30:03.882 }, 00:30:03.882 "method": 
"bdev_nvme_attach_controller" 00:30:03.882 } 00:30:03.882 EOF 00:30:03.882 )") 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3115955 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.882 "params": { 00:30:03.882 "name": "Nvme1", 00:30:03.882 "trtype": "tcp", 00:30:03.882 "traddr": "10.0.0.2", 00:30:03.882 "adrfam": "ipv4", 00:30:03.882 "trsvcid": "4420", 00:30:03.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.882 "hdgst": false, 00:30:03.882 "ddgst": false 00:30:03.882 }, 00:30:03.882 "method": "bdev_nvme_attach_controller" 00:30:03.882 }' 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.882 "params": { 00:30:03.882 "name": "Nvme1", 00:30:03.882 "trtype": "tcp", 00:30:03.882 "traddr": "10.0.0.2", 00:30:03.882 "adrfam": "ipv4", 00:30:03.882 "trsvcid": "4420", 00:30:03.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.882 "hdgst": false, 00:30:03.882 "ddgst": false 00:30:03.882 }, 00:30:03.882 "method": "bdev_nvme_attach_controller" 00:30:03.882 }' 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.882 "params": { 00:30:03.882 "name": "Nvme1", 00:30:03.882 "trtype": "tcp", 00:30:03.882 "traddr": "10.0.0.2", 00:30:03.882 "adrfam": "ipv4", 00:30:03.882 "trsvcid": "4420", 00:30:03.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.882 "hdgst": false, 00:30:03.882 "ddgst": false 00:30:03.882 }, 00:30:03.882 "method": "bdev_nvme_attach_controller" 00:30:03.882 }' 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:03.882 10:02:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:03.882 "params": { 00:30:03.882 "name": "Nvme1", 00:30:03.882 "trtype": "tcp", 00:30:03.882 "traddr": "10.0.0.2", 00:30:03.882 "adrfam": "ipv4", 00:30:03.882 "trsvcid": "4420", 00:30:03.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:03.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:03.882 "hdgst": false, 00:30:03.882 "ddgst": false 00:30:03.882 }, 00:30:03.882 "method": "bdev_nvme_attach_controller" 
00:30:03.882 }' 00:30:03.882 [2024-11-20 10:02:26.839049] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:30:03.882 [2024-11-20 10:02:26.839100] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:03.882 [2024-11-20 10:02:26.842107] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:30:03.882 [2024-11-20 10:02:26.842149] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:03.882 [2024-11-20 10:02:26.842527] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:30:03.882 [2024-11-20 10:02:26.842564] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:03.882 [2024-11-20 10:02:26.846021] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:30:03.882 [2024-11-20 10:02:26.846069] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:03.882 [2024-11-20 10:02:27.021143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.882 [2024-11-20 10:02:27.064811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:03.882 [2024-11-20 10:02:27.119430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.882 [2024-11-20 10:02:27.162324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:04.142 [2024-11-20 10:02:27.213771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.142 [2024-11-20 10:02:27.257420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.142 [2024-11-20 10:02:27.264799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:04.142 [2024-11-20 10:02:27.300404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:04.142 Running I/O for 1 seconds... 00:30:04.142 Running I/O for 1 seconds... 00:30:04.142 Running I/O for 1 seconds... 00:30:04.401 Running I/O for 1 seconds... 
00:30:05.338 8858.00 IOPS, 34.60 MiB/s 00:30:05.338 Latency(us) 00:30:05.338 [2024-11-20T09:02:28.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.338 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:05.338 Nvme1n1 : 1.02 8859.42 34.61 0.00 0.00 14345.33 1631.28 23592.96 00:30:05.338 [2024-11-20T09:02:28.670Z] =================================================================================================================== 00:30:05.338 [2024-11-20T09:02:28.670Z] Total : 8859.42 34.61 0.00 0.00 14345.33 1631.28 23592.96 00:30:05.338 246256.00 IOPS, 961.94 MiB/s 00:30:05.338 Latency(us) 00:30:05.338 [2024-11-20T09:02:28.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.338 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:05.338 Nvme1n1 : 1.00 245870.36 960.43 0.00 0.00 517.49 233.29 1538.67 00:30:05.338 [2024-11-20T09:02:28.670Z] =================================================================================================================== 00:30:05.338 [2024-11-20T09:02:28.670Z] Total : 245870.36 960.43 0.00 0.00 517.49 233.29 1538.67 00:30:05.338 8007.00 IOPS, 31.28 MiB/s [2024-11-20T09:02:28.670Z] 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3115957 00:30:05.338 00:30:05.338 Latency(us) 00:30:05.338 [2024-11-20T09:02:28.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.338 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:05.338 Nvme1n1 : 1.01 8094.28 31.62 0.00 0.00 15767.72 4673.00 25986.45 00:30:05.338 [2024-11-20T09:02:28.670Z] =================================================================================================================== 00:30:05.338 [2024-11-20T09:02:28.670Z] Total : 8094.28 31.62 0.00 0.00 15767.72 4673.00 25986.45 00:30:05.338 13683.00 IOPS, 53.45 MiB/s 00:30:05.338 
Latency(us) 00:30:05.338 [2024-11-20T09:02:28.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.338 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:05.338 Nvme1n1 : 1.00 13757.03 53.74 0.00 0.00 9282.09 1937.59 14246.96 00:30:05.338 [2024-11-20T09:02:28.670Z] =================================================================================================================== 00:30:05.338 [2024-11-20T09:02:28.670Z] Total : 13757.03 53.74 0.00 0.00 9282.09 1937.59 14246.96 00:30:05.338 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3115959 00:30:05.338 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3115962 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:05.598 rmmod nvme_tcp 00:30:05.598 rmmod nvme_fabrics 00:30:05.598 rmmod nvme_keyring 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3115933 ']' 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3115933 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3115933 ']' 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3115933 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3115933 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:05.598 10:02:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3115933' 00:30:05.598 killing process with pid 3115933 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3115933 00:30:05.598 10:02:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3115933 00:30:05.858 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:05.858 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:05.858 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:05.858 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:05.858 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:05.858 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:05.858 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:05.858 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:05.859 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:05.859 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.859 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.859 10:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.764 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:07.764 00:30:07.764 real 0m10.771s 00:30:07.764 user 0m15.324s 00:30:07.764 sys 0m6.346s 00:30:07.764 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.764 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:07.764 ************************************ 00:30:07.764 END TEST nvmf_bdev_io_wait 00:30:07.764 ************************************ 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:08.024 ************************************ 00:30:08.024 START TEST nvmf_queue_depth 00:30:08.024 ************************************ 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:08.024 * Looking for test storage... 
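The nvmf_queue_depth run that follows opens with a dotted-version check (`cmp_versions 1.15 '<' 2` via `lt`) to decide which lcov coverage flags apply. A minimal standalone sketch of that comparison — `version_lt` is an illustrative simplification of the scripts' `cmp_versions`, assuming bash:

```shell
# Simplified dotted-version compare: succeeds (returns 0) when $1 < $2.
# Components are split on '.' and '-', compared numerically left to right,
# with missing components treated as 0 (so 1.15 < 2, and 2 == 2.0).
version_lt() {
  local IFS=.- v1 v2 i
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the numeric (not lexicographic) comparison per component, which is what makes `1.9.9 < 1.10` come out right.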
00:30:08.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # lcov --version 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:30:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.024 --rc genhtml_branch_coverage=1 00:30:08.024 --rc genhtml_function_coverage=1 00:30:08.024 --rc genhtml_legend=1 00:30:08.024 --rc geninfo_all_blocks=1 00:30:08.024 --rc geninfo_unexecuted_blocks=1 00:30:08.024 00:30:08.024 ' 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:30:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.024 --rc genhtml_branch_coverage=1 00:30:08.024 --rc genhtml_function_coverage=1 00:30:08.024 --rc genhtml_legend=1 00:30:08.024 --rc geninfo_all_blocks=1 00:30:08.024 --rc geninfo_unexecuted_blocks=1 00:30:08.024 00:30:08.024 ' 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:30:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.024 --rc genhtml_branch_coverage=1 00:30:08.024 --rc genhtml_function_coverage=1 00:30:08.024 --rc genhtml_legend=1 00:30:08.024 --rc geninfo_all_blocks=1 00:30:08.024 --rc geninfo_unexecuted_blocks=1 00:30:08.024 00:30:08.024 ' 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:30:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.024 --rc genhtml_branch_coverage=1 00:30:08.024 --rc genhtml_function_coverage=1 00:30:08.024 --rc genhtml_legend=1 00:30:08.024 --rc 
geninfo_all_blocks=1 00:30:08.024 --rc geninfo_unexecuted_blocks=1 00:30:08.024 00:30:08.024 ' 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.024 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.025 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.025 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.285 10:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.285 10:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.285 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.286 10:02:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.286 10:02:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.858 
10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:14.858 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.858 10:02:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:14.858 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:14.858 Found net devices under 0000:86:00.0: cvl_0_0 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:14.858 Found net devices under 0000:86:00.1: cvl_0_1 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.858 10:02:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.858 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.859 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.859 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
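The nvmf_tcp_init steps below move the target NIC (cvl_0_0) into its own network namespace and address both sides of the pair, so the TCP transport is exercised over a real interface boundary rather than loopback. A hedged sketch of that layout — it needs root, and the `demo0`/`demo1` veth pair and `spdk_tgt_ns` name are illustrative stand-ins for the physical cvl_0_0/cvl_0_1 pair used in this run:

```shell
# Ops fragment (requires root): isolate the "target" end in a namespace.
ip netns add spdk_tgt_ns
ip link add demo0 type veth peer name demo1        # stand-in for the NIC pair
ip link set demo0 netns spdk_tgt_ns                # target side enters the ns
ip addr add 10.0.0.1/24 dev demo1                  # initiator address
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev demo0  # target address
ip link set demo1 up
ip netns exec spdk_tgt_ns ip link set demo0 up
ip netns exec spdk_tgt_ns ip link set lo up
ping -c 1 10.0.0.2                                 # initiator -> target check
```

The bidirectional pings in the log verify exactly this wiring before the nvmf target app is launched inside the namespace (`NVMF_APP` is prefixed with `ip netns exec`).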
00:30:14.859 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.859 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.859 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.859 10:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:14.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:30:14.859 00:30:14.859 --- 10.0.0.2 ping statistics --- 00:30:14.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.859 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:14.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:14.859 00:30:14.859 --- 10.0.0.1 ping statistics --- 00:30:14.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.859 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.859 10:02:37 
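The nvmf_tcp_init trace above (nvmf/common.sh@250-291) builds a split topology: one port of the NIC pair is moved into a private network namespace so initiator and target traffic crosses the physical link even on a single host, and the NVMe/TCP port is opened with a tagged iptables rule. Condensed as a hedged sketch (the function name is illustrative; interface names and addresses are taken from the log, and every command requires root):

```shell
# Sketch of the nvmf_tcp_init steps traced above. Requires root; names and
# addresses mirror the log (cvl_0_0 = target side, cvl_0_1 = initiator side).
setup_nvmf_tcp_pair() {
    local target_if=$1 initiator_if=$2 ns=$3
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"          # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator IP stays on the host
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port; the comment tag lets teardown strip the rule later
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    # verify reachability in both directions, as the log does with ping -c 1
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```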
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3119737 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3119737 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3119737 ']' 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.859 10:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:14.859 [2024-11-20 10:02:37.308601] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:14.859 [2024-11-20 10:02:37.309529] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:30:14.859 [2024-11-20 10:02:37.309563] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.859 [2024-11-20 10:02:37.392816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.859 [2024-11-20 10:02:37.435216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.859 [2024-11-20 10:02:37.435250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.859 [2024-11-20 10:02:37.435257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.859 [2024-11-20 10:02:37.435266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.859 [2024-11-20 10:02:37.435272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.859 [2024-11-20 10:02:37.435812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.859 [2024-11-20 10:02:37.502610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:14.859 [2024-11-20 10:02:37.502834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:14.859 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.859 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:14.859 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.859 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.859 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.118 [2024-11-20 10:02:38.200497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.118 Malloc0 00:30:15.118 10:02:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.118 [2024-11-20 10:02:38.280526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.118 
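The rpc_cmd calls traced at target/queue_depth.sh@23-27 above amount to a five-step target configuration. Written out as a plain rpc.py sequence for readability (the `RPC` path and function name are assumptions for illustration; the commands and arguments are copied verbatim from the trace):

```shell
# Target-side RPC sequence from the queue_depth.sh trace above. The rpc.py
# path is an assumed SPDK checkout location, not taken from the log.
configure_queue_depth_target() {
    local RPC="${SPDK_DIR:-.}/scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192     # options as traced above
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB RAM bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```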
10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3119980 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3119980 /var/tmp/bdevperf.sock 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3119980 ']' 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:15.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.118 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.118 [2024-11-20 10:02:38.332228] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:30:15.118 [2024-11-20 10:02:38.332278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3119980 ] 00:30:15.118 [2024-11-20 10:02:38.406003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.118 [2024-11-20 10:02:38.447383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.378 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.378 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:15.378 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:15.378 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.378 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.637 NVMe0n1 00:30:15.637 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.637 10:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:15.637 Running I/O for 10 seconds... 
00:30:17.563 11362.00 IOPS, 44.38 MiB/s [2024-11-20T09:02:42.273Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-20T09:02:42.843Z] 11948.00 IOPS, 46.67 MiB/s [2024-11-20T09:02:44.223Z] 12039.50 IOPS, 47.03 MiB/s [2024-11-20T09:02:45.159Z] 12098.40 IOPS, 47.26 MiB/s [2024-11-20T09:02:46.096Z] 12176.17 IOPS, 47.56 MiB/s [2024-11-20T09:02:47.033Z] 12198.43 IOPS, 47.65 MiB/s [2024-11-20T09:02:47.970Z] 12220.00 IOPS, 47.73 MiB/s [2024-11-20T09:02:48.909Z] 12223.33 IOPS, 47.75 MiB/s [2024-11-20T09:02:49.167Z] 12241.20 IOPS, 47.82 MiB/s 00:30:25.835 Latency(us) 00:30:25.835 [2024-11-20T09:02:49.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.835 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:25.835 Verification LBA range: start 0x0 length 0x4000 00:30:25.835 NVMe0n1 : 10.09 12215.43 47.72 0.00 0.00 83192.44 16868.40 56987.83 00:30:25.835 [2024-11-20T09:02:49.167Z] =================================================================================================================== 00:30:25.835 [2024-11-20T09:02:49.167Z] Total : 12215.43 47.72 0.00 0.00 83192.44 16868.40 56987.83 00:30:25.835 { 00:30:25.835 "results": [ 00:30:25.835 { 00:30:25.835 "job": "NVMe0n1", 00:30:25.835 "core_mask": "0x1", 00:30:25.835 "workload": "verify", 00:30:25.835 "status": "finished", 00:30:25.835 "verify_range": { 00:30:25.835 "start": 0, 00:30:25.835 "length": 16384 00:30:25.835 }, 00:30:25.835 "queue_depth": 1024, 00:30:25.835 "io_size": 4096, 00:30:25.835 "runtime": 10.094525, 00:30:25.835 "iops": 12215.433613765877, 00:30:25.835 "mibps": 47.71653755377296, 00:30:25.835 "io_failed": 0, 00:30:25.835 "io_timeout": 0, 00:30:25.835 "avg_latency_us": 83192.43534097973, 00:30:25.835 "min_latency_us": 16868.39652173913, 00:30:25.835 "max_latency_us": 56987.82608695652 00:30:25.835 } 00:30:25.835 ], 00:30:25.835 "core_count": 1 00:30:25.835 } 00:30:25.835 10:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
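The summary table and JSON block above report both `iops` and `mibps`; the two are consistent, since MiB/s = IOPS × io_size / 2^20. A quick cross-check using the figures from the results block:

```shell
# Cross-check the bdevperf summary: MiB/s should equal IOPS * io_size / 2^20.
# iops and io_size are copied from the JSON results printed above.
iops=12215.433613765877
io_size=4096
awk -v i="$iops" -v s="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", i * s / (1024 * 1024) }'
# prints "47.72 MiB/s", matching the reported mibps of 47.7165
```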
target/queue_depth.sh@39 -- # killprocess 3119980 00:30:25.835 10:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3119980 ']' 00:30:25.835 10:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3119980 00:30:25.835 10:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:25.835 10:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.835 10:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119980 00:30:25.835 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:25.835 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:25.835 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119980' 00:30:25.835 killing process with pid 3119980 00:30:25.836 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3119980 00:30:25.836 Received shutdown signal, test time was about 10.000000 seconds 00:30:25.836 00:30:25.836 Latency(us) 00:30:25.836 [2024-11-20T09:02:49.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.836 [2024-11-20T09:02:49.168Z] =================================================================================================================== 00:30:25.836 [2024-11-20T09:02:49.168Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:25.836 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3119980 00:30:26.096 10:02:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:26.096 rmmod nvme_tcp 00:30:26.096 rmmod nvme_fabrics 00:30:26.096 rmmod nvme_keyring 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:26.096 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3119737 ']' 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3119737 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3119737 ']' 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3119737 00:30:26.097 10:02:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119737 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119737' 00:30:26.097 killing process with pid 3119737 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3119737 00:30:26.097 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3119737 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
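The iptr helper traced at nvmf/common.sh@297 above undoes only SPDK's own firewall changes: because every rule the test inserted carries an `SPDK_NVMF` comment (see the ACCEPT rule added during setup), teardown can re-load the ruleset minus those lines without touching unrelated rules. As a hedged sketch (function name is illustrative; the pipeline is taken from the trace and needs root):

```shell
# Sketch of the iptr cleanup traced above: dump the current ruleset, drop
# every line tagged SPDK_NVMF, and restore the rest unchanged. Needs root.
remove_spdk_iptables_rules() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}
```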
00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.356 10:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.262 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:28.262 00:30:28.262 real 0m20.380s 00:30:28.262 user 0m22.968s 00:30:28.262 sys 0m6.386s 00:30:28.262 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:28.262 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:28.262 ************************************ 00:30:28.262 END TEST nvmf_queue_depth 00:30:28.262 ************************************ 00:30:28.262 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:28.262 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:28.262 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:28.262 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:28.521 ************************************ 00:30:28.521 START 
TEST nvmf_target_multipath 00:30:28.521 ************************************ 00:30:28.521 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:28.521 * Looking for test storage... 00:30:28.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:28.521 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:30:28.521 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # lcov --version 00:30:28.521 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:30:28.521 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:28.522 10:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:30:28.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.522 --rc genhtml_branch_coverage=1 00:30:28.522 --rc genhtml_function_coverage=1 00:30:28.522 --rc genhtml_legend=1 00:30:28.522 --rc geninfo_all_blocks=1 00:30:28.522 --rc geninfo_unexecuted_blocks=1 00:30:28.522 00:30:28.522 ' 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:30:28.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.522 --rc genhtml_branch_coverage=1 00:30:28.522 --rc genhtml_function_coverage=1 00:30:28.522 --rc genhtml_legend=1 00:30:28.522 --rc geninfo_all_blocks=1 00:30:28.522 --rc geninfo_unexecuted_blocks=1 00:30:28.522 00:30:28.522 ' 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:30:28.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.522 --rc genhtml_branch_coverage=1 00:30:28.522 --rc genhtml_function_coverage=1 00:30:28.522 --rc genhtml_legend=1 00:30:28.522 --rc geninfo_all_blocks=1 00:30:28.522 --rc geninfo_unexecuted_blocks=1 00:30:28.522 00:30:28.522 ' 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:30:28.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:28.522 --rc genhtml_branch_coverage=1 00:30:28.522 --rc genhtml_function_coverage=1 00:30:28.522 --rc genhtml_legend=1 00:30:28.522 --rc geninfo_all_blocks=1 00:30:28.522 --rc geninfo_unexecuted_blocks=1 00:30:28.522 00:30:28.522 ' 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.522 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:28.523 10:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.523 10:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:28.523 10:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.095 10:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:35.095 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:35.095 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:35.095 Found net devices under 0000:86:00.0: cvl_0_0 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.095 10:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:35.095 Found net devices under 0000:86:00.1: cvl_0_1 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.095 10:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.095 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.096 10:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:30:35.096 00:30:35.096 --- 10.0.0.2 ping statistics --- 00:30:35.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.096 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:35.096 00:30:35.096 --- 10.0.0.1 ping statistics --- 00:30:35.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.096 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:35.096 only one NIC for nvmf test 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:35.096 10:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.096 rmmod nvme_tcp 00:30:35.096 rmmod nvme_fabrics 00:30:35.096 rmmod nvme_keyring 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:35.096 10:02:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.096 10:02:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.001 
10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.001 00:30:37.001 real 0m8.310s 00:30:37.001 user 0m1.850s 00:30:37.001 sys 0m4.478s 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:37.001 ************************************ 00:30:37.001 END TEST nvmf_target_multipath 00:30:37.001 ************************************ 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:37.001 ************************************ 00:30:37.001 START TEST nvmf_zcopy 00:30:37.001 ************************************ 00:30:37.001 10:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:37.001 * Looking for test storage... 
00:30:37.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1703 -- # lcov --version 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.001 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:37.002 10:03:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:30:37.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.002 --rc genhtml_branch_coverage=1 00:30:37.002 --rc genhtml_function_coverage=1 00:30:37.002 --rc genhtml_legend=1 00:30:37.002 --rc geninfo_all_blocks=1 00:30:37.002 --rc geninfo_unexecuted_blocks=1 00:30:37.002 00:30:37.002 ' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:30:37.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.002 --rc genhtml_branch_coverage=1 00:30:37.002 --rc genhtml_function_coverage=1 00:30:37.002 --rc genhtml_legend=1 00:30:37.002 --rc geninfo_all_blocks=1 00:30:37.002 --rc geninfo_unexecuted_blocks=1 00:30:37.002 00:30:37.002 ' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:30:37.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.002 --rc genhtml_branch_coverage=1 00:30:37.002 --rc genhtml_function_coverage=1 00:30:37.002 --rc genhtml_legend=1 00:30:37.002 --rc geninfo_all_blocks=1 00:30:37.002 --rc geninfo_unexecuted_blocks=1 00:30:37.002 00:30:37.002 ' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:30:37.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.002 --rc genhtml_branch_coverage=1 00:30:37.002 --rc genhtml_function_coverage=1 00:30:37.002 --rc genhtml_legend=1 00:30:37.002 --rc geninfo_all_blocks=1 00:30:37.002 --rc geninfo_unexecuted_blocks=1 00:30:37.002 00:30:37.002 ' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.002 10:03:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.002 10:03:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:37.002 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.003 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.003 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.003 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:37.003 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:37.003 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.003 10:03:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:43.572 
10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.572 10:03:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:43.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.572 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:43.573 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:43.573 Found net devices under 0000:86:00.0: cvl_0_0 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:43.573 Found net devices under 0000:86:00.1: cvl_0_1 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:43.573 10:03:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.573 10:03:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:43.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:30:43.573 00:30:43.573 --- 10.0.0.2 ping statistics --- 00:30:43.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.573 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:43.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:30:43.573 00:30:43.573 --- 10.0.0.1 ping statistics --- 00:30:43.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.573 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3128746 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3128746 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3128746 ']' 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.573 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:43.573 [2024-11-20 10:03:06.157227] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:43.573 [2024-11-20 10:03:06.158196] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:30:43.573 [2024-11-20 10:03:06.158236] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.573 [2024-11-20 10:03:06.241640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.573 [2024-11-20 10:03:06.283613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.573 [2024-11-20 10:03:06.283650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.573 [2024-11-20 10:03:06.283658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.573 [2024-11-20 10:03:06.283664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.573 [2024-11-20 10:03:06.283669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.574 [2024-11-20 10:03:06.284232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.574 [2024-11-20 10:03:06.351716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:43.574 [2024-11-20 10:03:06.351961] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:43.833 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:43.833 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:30:43.833 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:43.833 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:43.833 10:03:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.833 [2024-11-20 10:03:07.040892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.833 [2024-11-20 10:03:07.069134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.833 malloc0
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:43.833 {
00:30:43.833 "params": {
00:30:43.833 "name": "Nvme$subsystem",
00:30:43.833 "trtype": "$TEST_TRANSPORT",
00:30:43.833 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:43.833 "adrfam": "ipv4",
00:30:43.833 "trsvcid": "$NVMF_PORT",
00:30:43.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:43.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:43.833 "hdgst": ${hdgst:-false},
00:30:43.833 "ddgst": ${ddgst:-false}
00:30:43.833 },
00:30:43.833 "method": "bdev_nvme_attach_controller"
00:30:43.833 }
00:30:43.833 EOF
00:30:43.833 )")
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:30:43.833 10:03:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:43.833 "params": {
00:30:43.833 "name": "Nvme1",
00:30:43.833 "trtype": "tcp",
00:30:43.833 "traddr": "10.0.0.2",
00:30:43.833 "adrfam": "ipv4",
00:30:43.833 "trsvcid": "4420",
00:30:43.833 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:30:43.833 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:30:43.833 "hdgst": false,
00:30:43.833 "ddgst": false
00:30:43.833 },
00:30:43.833 "method": "bdev_nvme_attach_controller"
00:30:43.833 }'
00:30:44.091 [2024-11-20 10:03:07.166727] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization...
00:30:44.091 [2024-11-20 10:03:07.166772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128995 ]
00:30:44.091 [2024-11-20 10:03:07.243892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:44.091 [2024-11-20 10:03:07.285312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:44.350 Running I/O for 10 seconds...
00:30:46.223 8122.00 IOPS, 63.45 MiB/s
[2024-11-20T09:03:10.928Z] 8264.00 IOPS, 64.56 MiB/s
[2024-11-20T09:03:11.862Z] 8256.67 IOPS, 64.51 MiB/s
[2024-11-20T09:03:12.797Z] 8295.75 IOPS, 64.81 MiB/s
[2024-11-20T09:03:13.732Z] 8320.60 IOPS, 65.00 MiB/s
[2024-11-20T09:03:14.729Z] 8336.17 IOPS, 65.13 MiB/s
[2024-11-20T09:03:15.712Z] 8343.00 IOPS, 65.18 MiB/s
[2024-11-20T09:03:16.649Z] 8349.75 IOPS, 65.23 MiB/s
[2024-11-20T09:03:17.587Z] 8361.11 IOPS, 65.32 MiB/s
[2024-11-20T09:03:17.587Z] 8368.40 IOPS, 65.38 MiB/s
00:30:54.255 Latency(us)
00:30:54.255 [2024-11-20T09:03:17.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:54.255 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:30:54.255 Verification LBA range: start 0x0 length 0x1000
00:30:54.255 Nvme1n1 : 10.01 8370.42 65.39 0.00 0.00 15248.44 1481.68 21883.33
00:30:54.255 [2024-11-20T09:03:17.587Z] ===================================================================================================================
00:30:54.255 [2024-11-20T09:03:17.587Z] Total : 8370.42 65.39 0.00 0.00 15248.44 1481.68 21883.33
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3130998
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:54.514 {
00:30:54.514 "params": {
00:30:54.514 "name": "Nvme$subsystem",
00:30:54.514 "trtype": "$TEST_TRANSPORT",
00:30:54.514 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:54.514 "adrfam": "ipv4",
00:30:54.514 "trsvcid": "$NVMF_PORT",
00:30:54.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:54.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:54.514 "hdgst": ${hdgst:-false},
00:30:54.514 "ddgst": ${ddgst:-false}
00:30:54.514 },
00:30:54.514 "method": "bdev_nvme_attach_controller"
00:30:54.514 }
00:30:54.514 EOF
00:30:54.514 )")
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:30:54.514 [2024-11-20 10:03:17.720576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-20 10:03:17.720611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:54.514 10:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:54.514 "params": { 00:30:54.514 "name": "Nvme1", 00:30:54.514 "trtype": "tcp", 00:30:54.514 "traddr": "10.0.0.2", 00:30:54.514 "adrfam": "ipv4", 00:30:54.514 "trsvcid": "4420", 00:30:54.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.514 "hdgst": false, 00:30:54.514 "ddgst": false 00:30:54.514 }, 00:30:54.514 "method": "bdev_nvme_attach_controller" 00:30:54.514 }' 00:30:54.514 [2024-11-20 10:03:17.732535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 10:03:17.732548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 10:03:17.744526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 10:03:17.744536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 10:03:17.756527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 10:03:17.756537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 10:03:17.762867] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:30:54.514 [2024-11-20 10:03:17.762908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130998 ] 00:30:54.514 [2024-11-20 10:03:17.768530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 10:03:17.768541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 10:03:17.780526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 10:03:17.780536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 10:03:17.792529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 10:03:17.792539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 10:03:17.804528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 10:03:17.804537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 10:03:17.816531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 10:03:17.816543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 10:03:17.828527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 10:03:17.828536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 10:03:17.838167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.514 [2024-11-20 10:03:17.840526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:54.514 [2024-11-20 10:03:17.840535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.852527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.852542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.864527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.864536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.876524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.876534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.880330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.774 [2024-11-20 10:03:17.888525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.888537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.900534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.900554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.912533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.912546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.924545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.924567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.936534] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.936547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.948528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.948540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.960525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.960534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.972546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.972565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.984531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.984545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:17.996531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:17.996545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:18.008531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:18.008546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:18.059189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:18.059208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:18.068528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:18.068540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 Running I/O for 5 seconds... 00:30:54.774 [2024-11-20 10:03:18.082760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:18.082779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.774 [2024-11-20 10:03:18.098211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.774 [2024-11-20 10:03:18.098230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.113584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.113603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.128398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.128417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.142703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.142722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.158183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.158209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.173134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.173153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.189243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.189262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.205327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.205348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.220529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.220549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.234401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.234420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.250020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.250039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.265030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.265049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.280487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.280523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.291676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.291695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.307009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 
[2024-11-20 10:03:18.307029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.321985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.322004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.337067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.337087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.033 [2024-11-20 10:03:18.352379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.033 [2024-11-20 10:03:18.352399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.365198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.365218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.380658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.380677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.394570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.394590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.409739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.409758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.424570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.424590] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.438104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.438123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.453153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.453172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.468311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.468330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.479886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.479905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.494274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.494294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.509720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.509739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.524538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.524562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.536069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.536088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:55.292 [2024-11-20 10:03:18.550401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.550420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.565567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.565585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.580490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.580509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.591803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.591822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 10:03:18.606774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 10:03:18.606793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.622160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.622181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.637066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.637085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.652071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.652091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.665877] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.665896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.677334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.677353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.690052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.690071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.704985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.705004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.720247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.720266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.731909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.731928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.746313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.746332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.761406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.761431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.776529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.776549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.789592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.789615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.804906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.804924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.817366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.817384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.830189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.830217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.845468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.845487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.860569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.860587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.551 [2024-11-20 10:03:18.871841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.551 [2024-11-20 10:03:18.871859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:18.886377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 
[2024-11-20 10:03:18.886395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:18.901645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:18.901663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:18.912328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:18.912346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:18.926699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:18.926718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:18.941761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:18.941779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:18.956259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:18.956277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:18.967454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:18.967479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:18.982611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:18.982630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:18.997610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:18.997628] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:19.012867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:19.012884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:19.028771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:19.028790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:19.040846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:19.040864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:19.054561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:19.054584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:19.070042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:19.070065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 16317.00 IOPS, 127.48 MiB/s [2024-11-20T09:03:19.142Z] [2024-11-20 10:03:19.085101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:19.085119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:19.100715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:19.100733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:19.114024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:19.114042] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.810 [2024-11-20 10:03:19.128830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.810 [2024-11-20 10:03:19.128847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.142639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.142659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.157907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.157926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.173031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.173050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.188848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.188867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.204242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.204261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.218881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.218899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.234114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.234133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:56.069 [2024-11-20 10:03:19.249561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.249580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.264518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.264538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.275584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.275602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.290738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.290757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.305606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.305624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.320941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.320965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.332568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.332587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.346550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.346568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.361967] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.361985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.377184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.377202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.069 [2024-11-20 10:03:19.389165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.069 [2024-11-20 10:03:19.389182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.402584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.402604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.417469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.417487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.432315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.432333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.444014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.444032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.458766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.458784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.473735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.473752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.488735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.488753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.500802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.500819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.514591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.514609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.529772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.529793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.540416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.540435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.554704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.554723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.569676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.569694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.584496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 
[2024-11-20 10:03:19.584516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.598677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.598696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.613831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.613850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.629111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.629131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.645142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.645161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.328 [2024-11-20 10:03:19.657174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.328 [2024-11-20 10:03:19.657193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.587 [2024-11-20 10:03:19.672843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.587 [2024-11-20 10:03:19.672863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.587 [2024-11-20 10:03:19.688854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.587 [2024-11-20 10:03:19.688874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.587 [2024-11-20 10:03:19.704631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.704652] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.716720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.716740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.730560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.730580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.745961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.745997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.760872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.760893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.776591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.776611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.788062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.788082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.802745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.802765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.818300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.818320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:56.588 [2024-11-20 10:03:19.833262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.833282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.849166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.849186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.864532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.864551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.876044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.876064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.890640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.890661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.588 [2024-11-20 10:03:19.905344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.588 [2024-11-20 10:03:19.905364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:19.920687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:19.920709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:19.932352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:19.932372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:19.947185] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:19.947216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:19.962244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:19.962265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:19.977186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:19.977206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:19.993286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:19.993307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.008816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.008836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.022070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.022091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.037394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.037415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.052536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.052558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.066604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.066624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 16267.00 IOPS, 127.09 MiB/s [2024-11-20T09:03:20.179Z] [2024-11-20 10:03:20.081884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.081903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.096953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.096973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.112084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.112105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.125510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.125531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.140779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.140804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.152891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.152910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.847 [2024-11-20 10:03:20.166493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.847 [2024-11-20 10:03:20.166516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.181623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.181643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.196264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.196283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.209543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.209562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.222257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.222277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.237914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.237933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.252769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.252789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.264509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.264529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.278364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.278384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.293558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 
[2024-11-20 10:03:20.293577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.308473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.308493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.320812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.320831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.334412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.334432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.349570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.349589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.364540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.364560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.378641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.378660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.394137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.394156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.409641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.409672] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.424489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.424510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.106 [2024-11-20 10:03:20.436023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.106 [2024-11-20 10:03:20.436043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.451164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.451185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.466074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.466093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.480891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.480910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.496011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.496031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.510602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.510621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.525682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.525702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:57.365 [2024-11-20 10:03:20.540032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.540052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.553656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.553677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.569014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.569033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.585051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.585070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.600412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.600431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.613458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.613477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.628782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.628802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.641240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.365 [2024-11-20 10:03:20.641260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.365 [2024-11-20 10:03:20.656610] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:57.365 [2024-11-20 10:03:20.656629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the two messages above repeat with advancing timestamps from 10:03:20.656 through 10:03:23.089; duplicate entries omitted)
00:30:57.884 16332.67 IOPS, 127.60 MiB/s [2024-11-20T09:03:21.216Z]
00:30:58.923 16334.00 IOPS, 127.61 MiB/s [2024-11-20T09:03:22.255Z]
00:30:59.963 16323.60 IOPS, 127.53 MiB/s [2024-11-20T09:03:23.295Z]
00:30:59.963 Latency(us)
00:30:59.963 Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:30:59.963 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:30:59.963 Nvme1n1             : 5.01       16327.39  127.56  0.00    0.00  7831.84  2065.81  13620.09
=================================================================================================================== 00:30:59.963 [2024-11-20T09:03:23.295Z] Total : 16327.39 127.56 0.00 0.00 7831.84 2065.81 13620.09 00:30:59.963 [2024-11-20 10:03:23.100531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.100549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.112530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.112544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.124544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.124562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.136534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.136550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.148536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.148548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.160530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.160543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.172530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.172544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.184530] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.184542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.196530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.196543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.208524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.208534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.220537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.220550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.232526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.232536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 10:03:23.244527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 10:03:23.244537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3130998) - No such process 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3130998 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.963 10:03:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.963 delay0 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.963 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.964 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.964 10:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:00.222 [2024-11-20 10:03:23.390639] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:06.791 Initializing NVMe Controllers 00:31:06.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:06.791 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:06.791 Initialization complete. Launching workers. 00:31:06.791 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 299, failed: 7314 00:31:06.791 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7547, failed to submit 66 00:31:06.791 success 7438, unsuccessful 109, failed 0 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.791 rmmod nvme_tcp 00:31:06.791 rmmod nvme_fabrics 00:31:06.791 rmmod nvme_keyring 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.791 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3128746 ']' 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 
3128746 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3128746 ']' 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3128746 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128746 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128746' 00:31:06.792 killing process with pid 3128746 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3128746 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3128746 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:06.792 10:03:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.792 10:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.698 10:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:08.698 00:31:08.698 real 0m31.885s 00:31:08.698 user 0m40.594s 00:31:08.698 sys 0m12.235s 00:31:08.698 10:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:08.698 10:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:08.698 ************************************ 00:31:08.698 END TEST nvmf_zcopy 00:31:08.698 ************************************ 00:31:08.698 10:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:08.698 10:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:08.698 10:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.698 10:03:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:08.698 ************************************ 00:31:08.698 START TEST nvmf_nmic 00:31:08.698 ************************************ 00:31:08.698 10:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:08.698 * Looking for test storage... 00:31:08.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1703 -- # lcov --version 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@338 -- # local 'op=<' 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@366 -- # ver2[v]=2 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:31:08.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.959 --rc genhtml_branch_coverage=1 00:31:08.959 --rc genhtml_function_coverage=1 00:31:08.959 --rc genhtml_legend=1 00:31:08.959 --rc geninfo_all_blocks=1 00:31:08.959 --rc geninfo_unexecuted_blocks=1 00:31:08.959 00:31:08.959 ' 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:31:08.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.959 --rc genhtml_branch_coverage=1 00:31:08.959 --rc genhtml_function_coverage=1 00:31:08.959 --rc genhtml_legend=1 00:31:08.959 --rc geninfo_all_blocks=1 00:31:08.959 --rc geninfo_unexecuted_blocks=1 00:31:08.959 00:31:08.959 ' 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:31:08.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.959 --rc genhtml_branch_coverage=1 00:31:08.959 --rc genhtml_function_coverage=1 00:31:08.959 --rc genhtml_legend=1 00:31:08.959 --rc geninfo_all_blocks=1 00:31:08.959 --rc geninfo_unexecuted_blocks=1 00:31:08.959 00:31:08.959 ' 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1717 -- # LCOV='lcov 00:31:08.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.959 --rc genhtml_branch_coverage=1 00:31:08.959 --rc genhtml_function_coverage=1 00:31:08.959 --rc genhtml_legend=1 00:31:08.959 --rc geninfo_all_blocks=1 00:31:08.959 --rc geninfo_unexecuted_blocks=1 00:31:08.959 00:31:08.959 ' 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.959 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... duplicated toolchain segments elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... duplicated toolchain segments elided ...]:/var/lib/snapd/snap/bin 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... duplicated toolchain segments elided ...]:/var/lib/snapd/snap/bin 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... duplicated toolchain segments elided ...]:/var/lib/snapd/snap/bin 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.960
10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.960 10:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.530 10:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.530 10:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:15.530 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:15.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.530 10:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:15.530 Found net devices under 0000:86:00.0: cvl_0_0 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.530 10:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:15.530 Found net devices under 0000:86:00.1: cvl_0_1 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.530 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.531 10:03:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.531 10:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:31:15.531 00:31:15.531 --- 10.0.0.2 ping statistics --- 00:31:15.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.531 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:15.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:31:15.531 00:31:15.531 --- 10.0.0.1 ping statistics --- 00:31:15.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.531 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3136354 
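The `nvmf_tcp_init` phase traced above moves one port of the NIC into a network namespace so target and initiator can talk over real hardware on one host. A dry-run sketch of that sequence (interface names, addresses, and the iptables rule are copied from this log; commands are only echoed, since the real thing needs root and the actual `cvl_0_*` devices):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init above.
# "echo +" only prints each command; drop it (and run as root) to execute.
set -euo pipefail

setup_netns_dryrun() {
    local ns=cvl_0_0_ns_spdk
    local run="echo +"
    $run ip netns add "$ns"
    $run ip link set cvl_0_0 netns "$ns"          # target-side port moves into the netns
    $run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator keeps 10.0.0.1
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    $run ip link set cvl_0_1 up
    $run ip netns exec "$ns" ip link set cvl_0_0 up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                       # initiator -> target
    $run ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator
}

setup_netns_dryrun
```

The bidirectional pings at the end mirror the `ping -c 1` checks in the log; the transcript then prefixes `NVMF_APP` with `ip netns exec cvl_0_0_ns_spdk` so `nvmf_tgt` runs inside the namespace.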
00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3136354 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3136354 ']' 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.531 [2024-11-20 10:03:38.122980] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:15.531 [2024-11-20 10:03:38.123931] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:31:15.531 [2024-11-20 10:03:38.123975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.531 [2024-11-20 10:03:38.203445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.531 [2024-11-20 10:03:38.246331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.531 [2024-11-20 10:03:38.246371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.531 [2024-11-20 10:03:38.246379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.531 [2024-11-20 10:03:38.246385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.531 [2024-11-20 10:03:38.246390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.531 [2024-11-20 10:03:38.247942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.531 [2024-11-20 10:03:38.248051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.531 [2024-11-20 10:03:38.248159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.531 [2024-11-20 10:03:38.248160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.531 [2024-11-20 10:03:38.315442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:15.531 [2024-11-20 10:03:38.316205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:15.531 [2024-11-20 10:03:38.316460] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:15.531 [2024-11-20 10:03:38.316744] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:15.531 [2024-11-20 10:03:38.316801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.531 [2024-11-20 10:03:38.396846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.531 Malloc0 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.531 [2024-11-20 10:03:38.472909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.531 10:03:38 
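The `rpc_cmd` calls traced above (nmic.sh lines 17-23) provision the target before test case1 runs. A dry-run sketch of the same sequence — the RPC names and arguments are from this log, while the `scripts/rpc.py` invocation path is the conventional SPDK client and is assumed here, since the log shows only the wrapped `rpc_cmd` helper:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target provisioning RPCs issued above. Commands are
# echoed, not executed: running them needs a live nvmf_tgt on /var/tmp/spdk.sock.
set -euo pipefail

provision_target_dryrun() {
    local rpc="echo + scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # -o/-u from nmic.sh@17
    $rpc bdev_malloc_create 64 512 -b Malloc0                    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

provision_target_dryrun
```

After this sequence the subsystem is reachable at 10.0.0.2:4420, which is what the `nvme connect` calls later in the transcript attach to.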
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.531 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:15.531 test case1: single bdev can't be used in multiple subsystems 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.532 [2024-11-20 10:03:38.508538] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:15.532 [2024-11-20 10:03:38.508559] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:15.532 [2024-11-20 10:03:38.508566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:15.532 request: 00:31:15.532 { 00:31:15.532 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:15.532 "namespace": { 00:31:15.532 "bdev_name": "Malloc0", 00:31:15.532 "no_auto_visible": false 00:31:15.532 }, 00:31:15.532 "method": "nvmf_subsystem_add_ns", 00:31:15.532 "req_id": 1 00:31:15.532 } 00:31:15.532 Got JSON-RPC error response 00:31:15.532 response: 00:31:15.532 { 00:31:15.532 "code": -32602, 00:31:15.532 "message": "Invalid parameters" 00:31:15.532 } 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:15.532 Adding namespace failed - expected result. 
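Test case1 above deliberately provokes a failure: `Malloc0` is already claimed (type `exclusive_write`) by cnode1, so adding it to cnode2 must return the JSON-RPC error `-32602`. A minimal sketch of that expected-failure pattern — the stand-in `rpc_cmd` below just replays the error response from this log, where the real helper invokes the RPC client against the running target:

```shell
#!/usr/bin/env bash
# Sketch of nmic.sh test case1's expected-failure check above. The fake
# rpc_cmd replays the JSON-RPC error captured in this log and fails.
set -euo pipefail

rpc_cmd() {  # stand-in: add_ns on an already-claimed bdev fails with -32602
    echo '{"code": -32602, "message": "Invalid parameters"}'
    return 1
}

nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1

if [ "$nmic_status" -eq 0 ]; then
    echo "namespace was added unexpectedly" >&2   # hypothetical failure branch
    exit 1
fi
echo " Adding namespace failed - expected result."
```

This matches the log's control flow: `nmic_status=1`, the `'[' 1 -eq 0 ']'` guard fails, and the "expected result" message is printed.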
00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:15.532 test case2: host connect to nvmf target in multiple paths 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:15.532 [2024-11-20 10:03:38.520632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:15.532 10:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:15.790 10:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:15.790 10:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:15.790 10:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:15.790 10:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:15.790 10:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:17.692 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:17.974 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:17.974 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:17.974 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:17.974 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:17.974 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:17.974 10:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:17.974 [global] 00:31:17.974 thread=1 00:31:17.974 invalidate=1 00:31:17.974 rw=write 00:31:17.974 time_based=1 00:31:17.974 runtime=1 00:31:17.974 ioengine=libaio 00:31:17.974 direct=1 00:31:17.974 bs=4096 00:31:17.974 iodepth=1 00:31:17.974 norandommap=0 00:31:17.974 numjobs=1 00:31:17.974 00:31:17.974 verify_dump=1 00:31:17.974 verify_backlog=512 00:31:17.974 verify_state_save=0 00:31:17.974 do_verify=1 00:31:17.974 verify=crc32c-intel 00:31:17.974 [job0] 00:31:17.974 filename=/dev/nvme0n1 00:31:17.974 Could not set queue depth (nvme0n1) 00:31:18.236 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:18.236 fio-3.35 00:31:18.236 Starting 1 thread 00:31:19.168 00:31:19.169 job0: (groupid=0, jobs=1): err= 0: pid=3136971: Wed Nov 20 
10:03:42 2024 00:31:19.169 read: IOPS=22, BW=89.5KiB/s (91.6kB/s)(92.0KiB/1028msec) 00:31:19.169 slat (nsec): min=11277, max=22655, avg=21166.04, stdev=2206.55 00:31:19.169 clat (usec): min=40879, max=41876, avg=41016.07, stdev=197.63 00:31:19.169 lat (usec): min=40901, max=41898, avg=41037.24, stdev=197.56 00:31:19.169 clat percentiles (usec): 00:31:19.169 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:19.169 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:19.169 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:19.169 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:19.169 | 99.99th=[41681] 00:31:19.169 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:31:19.169 slat (nsec): min=10017, max=34320, avg=13108.52, stdev=1547.55 00:31:19.169 clat (usec): min=136, max=280, avg=146.77, stdev= 7.35 00:31:19.169 lat (usec): min=148, max=315, avg=159.88, stdev= 8.27 00:31:19.169 clat percentiles (usec): 00:31:19.169 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 143], 00:31:19.169 | 30.00th=[ 145], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 147], 00:31:19.169 | 70.00th=[ 149], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 155], 00:31:19.169 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 281], 99.95th=[ 281], 00:31:19.169 | 99.99th=[ 281] 00:31:19.169 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:19.169 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:19.169 lat (usec) : 250=95.51%, 500=0.19% 00:31:19.169 lat (msec) : 50=4.30% 00:31:19.169 cpu : usr=0.49%, sys=0.88%, ctx=535, majf=0, minf=1 00:31:19.169 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.169 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.169 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:19.169 00:31:19.169 Run status group 0 (all jobs): 00:31:19.169 READ: bw=89.5KiB/s (91.6kB/s), 89.5KiB/s-89.5KiB/s (91.6kB/s-91.6kB/s), io=92.0KiB (94.2kB), run=1028-1028msec 00:31:19.169 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:31:19.169 00:31:19.169 Disk stats (read/write): 00:31:19.169 nvme0n1: ios=69/512, merge=0/0, ticks=799/71, in_queue=870, util=91.18% 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:19.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:19.426 10:03:42 
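The `waitforserial` / `waitforserial_disconnect` helpers traced above both reduce to polling `lsblk -l -o NAME,SERIAL` for the subsystem serial (`SPDKISFASTANDAWESOME`) until it appears or disappears. A sketch of that check — the `lsblk` snapshots below are illustrative stand-ins, not output from this log:

```shell
#!/usr/bin/env bash
# Sketch of the serial-polling check used by waitforserial(_disconnect) above.
set -euo pipefail

has_serial() {   # $1 = subsystem serial, $2 = captured `lsblk -l -o NAME,SERIAL` output
    printf '%s\n' "$2" | grep -q -w "$1"
}

# Hypothetical lsblk snapshots before and after `nvme disconnect`:
connected='NAME     SERIAL
nvme0n1  SPDKISFASTANDAWESOME'
disconnected='NAME     SERIAL'

has_serial SPDKISFASTANDAWESOME "$connected"    && echo "device present"
has_serial SPDKISFASTANDAWESOME "$disconnected" || echo "device gone"
```

In the transcript the connect side counts matches (`grep -c`) against an expected device count, and the disconnect side loops until `grep -q -w` stops matching; both are the same test on fresh `lsblk` output.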
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:19.426 rmmod nvme_tcp 00:31:19.426 rmmod nvme_fabrics 00:31:19.426 rmmod nvme_keyring 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3136354 ']' 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3136354 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3136354 ']' 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3136354 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:19.426 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3136354 
00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3136354' 00:31:19.685 killing process with pid 3136354 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3136354 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3136354 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.685 10:03:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.685 10:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:22.225 00:31:22.225 real 0m13.112s 00:31:22.225 user 0m24.246s 00:31:22.225 sys 0m6.051s 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:22.225 ************************************ 00:31:22.225 END TEST nvmf_nmic 00:31:22.225 ************************************ 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:22.225 ************************************ 00:31:22.225 START TEST nvmf_fio_target 00:31:22.225 ************************************ 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:22.225 * Looking for test storage... 
00:31:22.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1703 -- # lcov --version 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:22.225 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.226 
10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:31:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.226 --rc genhtml_branch_coverage=1 00:31:22.226 --rc genhtml_function_coverage=1 00:31:22.226 --rc genhtml_legend=1 00:31:22.226 --rc geninfo_all_blocks=1 00:31:22.226 --rc geninfo_unexecuted_blocks=1 00:31:22.226 00:31:22.226 ' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:31:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.226 --rc genhtml_branch_coverage=1 00:31:22.226 --rc genhtml_function_coverage=1 00:31:22.226 --rc genhtml_legend=1 00:31:22.226 --rc geninfo_all_blocks=1 00:31:22.226 --rc geninfo_unexecuted_blocks=1 00:31:22.226 00:31:22.226 ' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:31:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.226 --rc genhtml_branch_coverage=1 00:31:22.226 --rc genhtml_function_coverage=1 00:31:22.226 --rc genhtml_legend=1 00:31:22.226 --rc geninfo_all_blocks=1 00:31:22.226 --rc geninfo_unexecuted_blocks=1 00:31:22.226 00:31:22.226 ' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:31:22.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.226 --rc genhtml_branch_coverage=1 00:31:22.226 --rc genhtml_function_coverage=1 00:31:22.226 --rc genhtml_legend=1 00:31:22.226 --rc geninfo_all_blocks=1 
00:31:22.226 --rc geninfo_unexecuted_blocks=1 00:31:22.226 00:31:22.226 ' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:22.226 
10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.226 10:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.226 
10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:22.226 10:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.227 10:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.797 10:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:28.797 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.797 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:28.798 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.798 
10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:28.798 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:28.798 Found net devices under 0000:86:00.1: cvl_0_1 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:28.798 10:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:28.798 10:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:31:28.798 00:31:28.798 --- 10.0.0.2 ping statistics --- 00:31:28.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.798 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:31:28.798 00:31:28.798 --- 10.0.0.1 ping statistics --- 00:31:28.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.798 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:28.798 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.799 10:03:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3140719 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3140719 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3140719 ']' 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.799 [2024-11-20 10:03:51.280288] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:28.799 [2024-11-20 10:03:51.281302] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:31:28.799 [2024-11-20 10:03:51.281341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.799 [2024-11-20 10:03:51.362146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:28.799 [2024-11-20 10:03:51.404830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.799 [2024-11-20 10:03:51.404868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.799 [2024-11-20 10:03:51.404875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.799 [2024-11-20 10:03:51.404881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.799 [2024-11-20 10:03:51.404886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.799 [2024-11-20 10:03:51.406393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.799 [2024-11-20 10:03:51.406504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:28.799 [2024-11-20 10:03:51.406532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.799 [2024-11-20 10:03:51.406533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:28.799 [2024-11-20 10:03:51.475145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:28.799 [2024-11-20 10:03:51.475930] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:28.799 [2024-11-20 10:03:51.476163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:28.799 [2024-11-20 10:03:51.476539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:28.799 [2024-11-20 10:03:51.476566] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:28.799 [2024-11-20 10:03:51.719451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:28.799 10:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:29.059 10:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:29.059 10:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:29.318 10:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:29.318 10:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:29.318 10:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:29.318 10:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:29.577 10:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:29.836 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:29.836 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:30.095 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:30.095 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:30.095 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:30.095 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:30.354 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:30.613 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:30.613 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:30.871 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:30.871 10:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:30.871 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.128 [2024-11-20 10:03:54.343301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.128 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:31.384 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:31.641 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:31.641 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:31.898 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:31.898 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:31.898 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:31.898 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:31.898 10:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:33.795 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:33.795 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:33.795 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:33.795 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:33.795 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:33.795 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:31:33.795 10:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:33.795 [global] 00:31:33.795 thread=1 00:31:33.795 invalidate=1 00:31:33.795 rw=write 00:31:33.795 time_based=1 00:31:33.795 runtime=1 00:31:33.795 ioengine=libaio 00:31:33.795 direct=1 00:31:33.795 bs=4096 00:31:33.795 iodepth=1 00:31:33.795 norandommap=0 00:31:33.795 numjobs=1 00:31:33.795 00:31:33.795 verify_dump=1 00:31:33.795 verify_backlog=512 00:31:33.795 verify_state_save=0 00:31:33.795 do_verify=1 00:31:33.795 verify=crc32c-intel 00:31:33.795 [job0] 00:31:33.795 filename=/dev/nvme0n1 00:31:33.795 [job1] 00:31:33.795 filename=/dev/nvme0n2 00:31:33.795 [job2] 00:31:33.795 filename=/dev/nvme0n3 00:31:33.795 [job3] 00:31:33.795 filename=/dev/nvme0n4 00:31:33.795 Could not set queue depth (nvme0n1) 00:31:33.795 Could not set queue depth (nvme0n2) 00:31:33.795 Could not set queue depth (nvme0n3) 00:31:33.795 Could not set queue depth (nvme0n4) 00:31:34.052 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:34.052 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:34.052 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:34.052 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:34.052 fio-3.35 00:31:34.052 Starting 4 threads 00:31:35.424 00:31:35.424 job0: (groupid=0, jobs=1): err= 0: pid=3141839: Wed Nov 20 10:03:58 2024 00:31:35.424 read: IOPS=2429, BW=9718KiB/s (9952kB/s)(9728KiB/1001msec) 00:31:35.424 slat (nsec): min=3777, max=31916, avg=5591.02, stdev=1749.80 00:31:35.424 clat (usec): min=167, max=468, avg=236.25, stdev=28.35 00:31:35.424 lat (usec): min=172, max=500, 
avg=241.84, stdev=28.42 00:31:35.424 clat percentiles (usec): 00:31:35.424 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 212], 00:31:35.424 | 30.00th=[ 225], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:31:35.424 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 269], 00:31:35.424 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 383], 99.95th=[ 396], 00:31:35.424 | 99.99th=[ 469] 00:31:35.424 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:35.424 slat (nsec): min=4739, max=32489, avg=7465.76, stdev=2069.07 00:31:35.424 clat (usec): min=115, max=309, avg=148.73, stdev=15.85 00:31:35.424 lat (usec): min=121, max=341, avg=156.19, stdev=16.23 00:31:35.424 clat percentiles (usec): 00:31:35.424 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 137], 00:31:35.424 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:31:35.424 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 174], 00:31:35.424 | 99.00th=[ 204], 99.50th=[ 217], 99.90th=[ 249], 99.95th=[ 306], 00:31:35.424 | 99.99th=[ 310] 00:31:35.424 bw ( KiB/s): min=12263, max=12263, per=55.52%, avg=12263.00, stdev= 0.00, samples=1 00:31:35.424 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:31:35.424 lat (usec) : 250=82.79%, 500=17.21% 00:31:35.424 cpu : usr=1.60%, sys=3.30%, ctx=4995, majf=0, minf=1 00:31:35.424 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.424 issued rwts: total=2432,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.424 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:35.424 job1: (groupid=0, jobs=1): err= 0: pid=3141840: Wed Nov 20 10:03:58 2024 00:31:35.424 read: IOPS=321, BW=1286KiB/s (1317kB/s)(1312KiB/1020msec) 00:31:35.424 slat (nsec): min=7398, max=37458, avg=10576.96, 
stdev=5094.48 00:31:35.424 clat (usec): min=202, max=41210, avg=2772.06, stdev=9751.10 00:31:35.424 lat (usec): min=210, max=41219, avg=2782.64, stdev=9753.42 00:31:35.424 clat percentiles (usec): 00:31:35.424 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 237], 20.00th=[ 253], 00:31:35.424 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:31:35.424 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 482], 95.00th=[41157], 00:31:35.424 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:35.424 | 99.99th=[41157] 00:31:35.424 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:31:35.424 slat (nsec): min=9241, max=34314, avg=12829.43, stdev=3771.08 00:31:35.424 clat (usec): min=140, max=358, avg=189.65, stdev=21.49 00:31:35.424 lat (usec): min=156, max=392, avg=202.48, stdev=22.03 00:31:35.424 clat percentiles (usec): 00:31:35.425 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:31:35.425 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 192], 00:31:35.425 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 229], 00:31:35.425 | 99.00th=[ 258], 99.50th=[ 281], 99.90th=[ 359], 99.95th=[ 359], 00:31:35.425 | 99.99th=[ 359] 00:31:35.425 bw ( KiB/s): min= 4087, max= 4087, per=18.50%, avg=4087.00, stdev= 0.00, samples=1 00:31:35.425 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:35.425 lat (usec) : 250=66.79%, 500=30.36%, 750=0.48% 00:31:35.425 lat (msec) : 50=2.38% 00:31:35.425 cpu : usr=0.39%, sys=0.98%, ctx=840, majf=0, minf=1 00:31:35.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.425 issued rwts: total=328,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:35.425 job2: (groupid=0, 
jobs=1): err= 0: pid=3141842: Wed Nov 20 10:03:58 2024 00:31:35.425 read: IOPS=775, BW=3100KiB/s (3175kB/s)(3156KiB/1018msec) 00:31:35.425 slat (nsec): min=7291, max=43892, avg=10322.71, stdev=4813.29 00:31:35.425 clat (usec): min=197, max=41449, avg=1004.55, stdev=5382.40 00:31:35.425 lat (usec): min=215, max=41458, avg=1014.87, stdev=5383.20 00:31:35.425 clat percentiles (usec): 00:31:35.425 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:31:35.425 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:31:35.425 | 70.00th=[ 281], 80.00th=[ 322], 90.00th=[ 375], 95.00th=[ 469], 00:31:35.425 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:31:35.425 | 99.99th=[41681] 00:31:35.425 write: IOPS=1005, BW=4024KiB/s (4120kB/s)(4096KiB/1018msec); 0 zone resets 00:31:35.425 slat (nsec): min=10111, max=37484, avg=12314.15, stdev=2681.35 00:31:35.425 clat (usec): min=143, max=382, avg=192.79, stdev=25.48 00:31:35.425 lat (usec): min=155, max=400, avg=205.10, stdev=25.32 00:31:35.425 clat percentiles (usec): 00:31:35.425 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 172], 00:31:35.425 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:31:35.425 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 229], 95.00th=[ 241], 00:31:35.425 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 383], 00:31:35.425 | 99.99th=[ 383] 00:31:35.425 bw ( KiB/s): min= 8175, max= 8175, per=37.01%, avg=8175.00, stdev= 0.00, samples=1 00:31:35.425 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:31:35.425 lat (usec) : 250=70.10%, 500=28.68%, 750=0.44% 00:31:35.425 lat (msec) : 50=0.77% 00:31:35.425 cpu : usr=2.16%, sys=2.46%, ctx=1813, majf=0, minf=1 00:31:35.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:35.425 issued rwts: total=789,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:35.425 job3: (groupid=0, jobs=1): err= 0: pid=3141843: Wed Nov 20 10:03:58 2024 00:31:35.425 read: IOPS=1479, BW=5918KiB/s (6060kB/s)(5924KiB/1001msec) 00:31:35.425 slat (nsec): min=7236, max=37351, avg=8520.03, stdev=1540.74 00:31:35.425 clat (usec): min=177, max=41427, avg=465.45, stdev=2796.70 00:31:35.425 lat (usec): min=185, max=41435, avg=473.97, stdev=2797.12 00:31:35.425 clat percentiles (usec): 00:31:35.425 | 1.00th=[ 188], 5.00th=[ 208], 10.00th=[ 233], 20.00th=[ 241], 00:31:35.425 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 262], 00:31:35.425 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 367], 95.00th=[ 424], 00:31:35.425 | 99.00th=[ 506], 99.50th=[ 578], 99.90th=[41157], 99.95th=[41681], 00:31:35.425 | 99.99th=[41681] 00:31:35.425 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:31:35.425 slat (nsec): min=10253, max=41371, avg=11530.68, stdev=1745.96 00:31:35.425 clat (usec): min=131, max=1492, avg=176.59, stdev=41.33 00:31:35.425 lat (usec): min=142, max=1510, avg=188.12, stdev=41.64 00:31:35.425 clat percentiles (usec): 00:31:35.425 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 159], 00:31:35.425 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:31:35.425 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 217], 00:31:35.425 | 99.00th=[ 249], 99.50th=[ 273], 99.90th=[ 363], 99.95th=[ 1500], 00:31:35.425 | 99.99th=[ 1500] 00:31:35.425 bw ( KiB/s): min= 4087, max= 4087, per=18.50%, avg=4087.00, stdev= 0.00, samples=1 00:31:35.425 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:31:35.425 lat (usec) : 250=73.15%, 500=26.32%, 750=0.27% 00:31:35.425 lat (msec) : 2=0.03%, 50=0.23% 00:31:35.425 cpu : usr=1.80%, sys=5.70%, ctx=3018, majf=0, minf=1 00:31:35.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.425 issued rwts: total=1481,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:35.425 00:31:35.425 Run status group 0 (all jobs): 00:31:35.425 READ: bw=19.3MiB/s (20.2MB/s), 1286KiB/s-9718KiB/s (1317kB/s-9952kB/s), io=19.6MiB (20.6MB), run=1001-1020msec 00:31:35.425 WRITE: bw=21.6MiB/s (22.6MB/s), 2008KiB/s-9.99MiB/s (2056kB/s-10.5MB/s), io=22.0MiB (23.1MB), run=1001-1020msec 00:31:35.425 00:31:35.425 Disk stats (read/write): 00:31:35.425 nvme0n1: ios=2006/2048, merge=0/0, ticks=1504/303, in_queue=1807, util=96.79% 00:31:35.425 nvme0n2: ios=348/512, merge=0/0, ticks=718/97, in_queue=815, util=85.89% 00:31:35.425 nvme0n3: ios=789/1024, merge=0/0, ticks=783/184, in_queue=967, util=89.77% 00:31:35.425 nvme0n4: ios=1024/1113, merge=0/0, ticks=575/190, in_queue=765, util=89.04% 00:31:35.425 10:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:35.425 [global] 00:31:35.425 thread=1 00:31:35.425 invalidate=1 00:31:35.425 rw=randwrite 00:31:35.425 time_based=1 00:31:35.425 runtime=1 00:31:35.425 ioengine=libaio 00:31:35.425 direct=1 00:31:35.425 bs=4096 00:31:35.425 iodepth=1 00:31:35.425 norandommap=0 00:31:35.425 numjobs=1 00:31:35.425 00:31:35.425 verify_dump=1 00:31:35.425 verify_backlog=512 00:31:35.425 verify_state_save=0 00:31:35.425 do_verify=1 00:31:35.425 verify=crc32c-intel 00:31:35.425 [job0] 00:31:35.425 filename=/dev/nvme0n1 00:31:35.425 [job1] 00:31:35.425 filename=/dev/nvme0n2 00:31:35.425 [job2] 00:31:35.425 filename=/dev/nvme0n3 00:31:35.425 [job3] 00:31:35.425 filename=/dev/nvme0n4 00:31:35.425 Could not set queue depth 
(nvme0n1) 00:31:35.425 Could not set queue depth (nvme0n2) 00:31:35.425 Could not set queue depth (nvme0n3) 00:31:35.425 Could not set queue depth (nvme0n4) 00:31:35.711 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.711 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.711 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.711 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.711 fio-3.35 00:31:35.711 Starting 4 threads 00:31:37.082 00:31:37.082 job0: (groupid=0, jobs=1): err= 0: pid=3142217: Wed Nov 20 10:04:00 2024 00:31:37.082 read: IOPS=2483, BW=9932KiB/s (10.2MB/s)(10.0MiB/1031msec) 00:31:37.082 slat (nsec): min=6373, max=39993, avg=8220.86, stdev=1549.58 00:31:37.082 clat (usec): min=169, max=41139, avg=225.35, stdev=1144.87 00:31:37.082 lat (usec): min=177, max=41145, avg=233.57, stdev=1145.07 00:31:37.082 clat percentiles (usec): 00:31:37.082 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:31:37.082 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:31:37.082 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 217], 00:31:37.083 | 99.00th=[ 251], 99.50th=[ 293], 99.90th=[ 3884], 99.95th=[41157], 00:31:37.083 | 99.99th=[41157] 00:31:37.083 write: IOPS=2483, BW=9932KiB/s (10.2MB/s)(10.0MiB/1031msec); 0 zone resets 00:31:37.083 slat (nsec): min=10363, max=41044, avg=11572.85, stdev=1732.12 00:31:37.083 clat (usec): min=116, max=337, avg=151.50, stdev=21.20 00:31:37.083 lat (usec): min=138, max=373, avg=163.07, stdev=21.49 00:31:37.083 clat percentiles (usec): 00:31:37.083 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 133], 20.00th=[ 135], 00:31:37.083 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 149], 00:31:37.083 | 70.00th=[ 165], 
80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 184], 00:31:37.083 | 99.00th=[ 200], 99.50th=[ 243], 99.90th=[ 249], 99.95th=[ 251], 00:31:37.083 | 99.99th=[ 338] 00:31:37.083 bw ( KiB/s): min= 8192, max=12288, per=64.50%, avg=10240.00, stdev=2896.31, samples=2 00:31:37.083 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:31:37.083 lat (usec) : 250=99.45%, 500=0.47% 00:31:37.083 lat (msec) : 2=0.02%, 4=0.02%, 50=0.04% 00:31:37.083 cpu : usr=4.08%, sys=7.86%, ctx=5121, majf=0, minf=1 00:31:37.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.083 issued rwts: total=2560,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.083 job1: (groupid=0, jobs=1): err= 0: pid=3142218: Wed Nov 20 10:04:00 2024 00:31:37.083 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:31:37.083 slat (nsec): min=8759, max=24462, avg=21855.23, stdev=3138.11 00:31:37.083 clat (usec): min=40846, max=41292, avg=40986.18, stdev=91.41 00:31:37.083 lat (usec): min=40867, max=41300, avg=41008.04, stdev=89.16 00:31:37.083 clat percentiles (usec): 00:31:37.083 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:37.083 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:37.083 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:37.083 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:37.083 | 99.99th=[41157] 00:31:37.083 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:31:37.083 slat (nsec): min=3592, max=40142, avg=11089.54, stdev=2254.42 00:31:37.083 clat (usec): min=141, max=3264, avg=229.50, stdev=138.56 00:31:37.083 lat (usec): min=151, max=3281, avg=240.58, stdev=138.89 00:31:37.083 
clat percentiles (usec): 00:31:37.083 | 1.00th=[ 155], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 190], 00:31:37.083 | 30.00th=[ 204], 40.00th=[ 217], 50.00th=[ 229], 60.00th=[ 237], 00:31:37.083 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:31:37.083 | 99.00th=[ 293], 99.50th=[ 330], 99.90th=[ 3261], 99.95th=[ 3261], 00:31:37.083 | 99.99th=[ 3261] 00:31:37.083 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:31:37.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:37.083 lat (usec) : 250=73.03%, 500=22.66% 00:31:37.083 lat (msec) : 4=0.19%, 50=4.12% 00:31:37.083 cpu : usr=0.68%, sys=0.58%, ctx=534, majf=0, minf=2 00:31:37.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.083 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.083 job2: (groupid=0, jobs=1): err= 0: pid=3142219: Wed Nov 20 10:04:00 2024 00:31:37.083 read: IOPS=165, BW=663KiB/s (679kB/s)(684KiB/1032msec) 00:31:37.083 slat (nsec): min=6996, max=27375, avg=9851.96, stdev=5059.71 00:31:37.083 clat (usec): min=226, max=41485, avg=5283.93, stdev=13401.08 00:31:37.083 lat (usec): min=233, max=41495, avg=5293.79, stdev=13405.57 00:31:37.083 clat percentiles (usec): 00:31:37.083 | 1.00th=[ 235], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:31:37.083 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:31:37.083 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[41157], 95.00th=[41157], 00:31:37.083 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:37.083 | 99.99th=[41681] 00:31:37.083 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:31:37.083 slat (nsec): min=9828, 
max=45709, avg=12513.63, stdev=2698.67 00:31:37.083 clat (usec): min=155, max=435, avg=230.99, stdev=32.04 00:31:37.083 lat (usec): min=166, max=481, avg=243.50, stdev=33.00 00:31:37.083 clat percentiles (usec): 00:31:37.083 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 200], 00:31:37.083 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241], 00:31:37.083 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 273], 00:31:37.083 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 437], 99.95th=[ 437], 00:31:37.083 | 99.99th=[ 437] 00:31:37.083 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:31:37.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:37.083 lat (usec) : 250=61.05%, 500=35.72%, 750=0.15% 00:31:37.083 lat (msec) : 50=3.07% 00:31:37.083 cpu : usr=0.10%, sys=1.07%, ctx=687, majf=0, minf=1 00:31:37.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.083 issued rwts: total=171,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.083 job3: (groupid=0, jobs=1): err= 0: pid=3142220: Wed Nov 20 10:04:00 2024 00:31:37.083 read: IOPS=23, BW=95.7KiB/s (98.0kB/s)(96.0KiB/1003msec) 00:31:37.083 slat (nsec): min=11452, max=26498, avg=22541.46, stdev=4221.78 00:31:37.083 clat (usec): min=284, max=41130, avg=37574.01, stdev=11477.09 00:31:37.083 lat (usec): min=310, max=41141, avg=37596.55, stdev=11478.00 00:31:37.083 clat percentiles (usec): 00:31:37.083 | 1.00th=[ 285], 5.00th=[ 338], 10.00th=[40633], 20.00th=[40633], 00:31:37.083 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:37.083 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:37.083 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:37.083 | 99.99th=[41157] 00:31:37.083 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:31:37.083 slat (nsec): min=9568, max=38211, avg=10952.81, stdev=2358.95 00:31:37.083 clat (usec): min=156, max=403, avg=183.07, stdev=22.04 00:31:37.083 lat (usec): min=166, max=420, avg=194.03, stdev=23.21 00:31:37.083 clat percentiles (usec): 00:31:37.083 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:31:37.083 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:31:37.083 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 212], 00:31:37.083 | 99.00th=[ 293], 99.50th=[ 326], 99.90th=[ 404], 99.95th=[ 404], 00:31:37.083 | 99.99th=[ 404] 00:31:37.083 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:31:37.083 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:37.083 lat (usec) : 250=93.84%, 500=2.05% 00:31:37.083 lat (msec) : 50=4.10% 00:31:37.083 cpu : usr=0.20%, sys=0.60%, ctx=537, majf=0, minf=1 00:31:37.083 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.083 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.083 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.083 00:31:37.084 Run status group 0 (all jobs): 00:31:37.084 READ: bw=10.5MiB/s (11.0MB/s), 85.7KiB/s-9932KiB/s (87.7kB/s-10.2MB/s), io=10.8MiB (11.4MB), run=1003-1032msec 00:31:37.084 WRITE: bw=15.5MiB/s (16.3MB/s), 1984KiB/s-9932KiB/s (2032kB/s-10.2MB/s), io=16.0MiB (16.8MB), run=1003-1032msec 00:31:37.084 00:31:37.084 Disk stats (read/write): 00:31:37.084 nvme0n1: ios=2091/2532, merge=0/0, ticks=1237/350, in_queue=1587, util=83.87% 00:31:37.084 nvme0n2: ios=67/512, merge=0/0, ticks=756/110, 
in_queue=866, util=88.67% 00:31:37.084 nvme0n3: ios=189/512, merge=0/0, ticks=1604/113, in_queue=1717, util=91.96% 00:31:37.084 nvme0n4: ios=76/512, merge=0/0, ticks=809/85, in_queue=894, util=94.22% 00:31:37.084 10:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:37.084 [global] 00:31:37.084 thread=1 00:31:37.084 invalidate=1 00:31:37.084 rw=write 00:31:37.084 time_based=1 00:31:37.084 runtime=1 00:31:37.084 ioengine=libaio 00:31:37.084 direct=1 00:31:37.084 bs=4096 00:31:37.084 iodepth=128 00:31:37.084 norandommap=0 00:31:37.084 numjobs=1 00:31:37.084 00:31:37.084 verify_dump=1 00:31:37.084 verify_backlog=512 00:31:37.084 verify_state_save=0 00:31:37.084 do_verify=1 00:31:37.084 verify=crc32c-intel 00:31:37.084 [job0] 00:31:37.084 filename=/dev/nvme0n1 00:31:37.084 [job1] 00:31:37.084 filename=/dev/nvme0n2 00:31:37.084 [job2] 00:31:37.084 filename=/dev/nvme0n3 00:31:37.084 [job3] 00:31:37.084 filename=/dev/nvme0n4 00:31:37.084 Could not set queue depth (nvme0n1) 00:31:37.084 Could not set queue depth (nvme0n2) 00:31:37.084 Could not set queue depth (nvme0n3) 00:31:37.084 Could not set queue depth (nvme0n4) 00:31:37.342 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:37.342 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:37.342 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:37.342 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:37.342 fio-3.35 00:31:37.342 Starting 4 threads 00:31:38.711 00:31:38.712 job0: (groupid=0, jobs=1): err= 0: pid=3142585: Wed Nov 20 10:04:01 2024 00:31:38.712 read: IOPS=2599, BW=10.2MiB/s (10.6MB/s)(10.2MiB/1009msec) 00:31:38.712 slat 
(nsec): min=1376, max=24657k, avg=194198.96, stdev=1533288.60 00:31:38.712 clat (usec): min=4992, max=69546, avg=25583.88, stdev=15538.49 00:31:38.712 lat (usec): min=6555, max=69574, avg=25778.08, stdev=15662.78 00:31:38.712 clat percentiles (usec): 00:31:38.712 | 1.00th=[ 6587], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[11076], 00:31:38.712 | 30.00th=[12256], 40.00th=[14222], 50.00th=[17957], 60.00th=[31327], 00:31:38.712 | 70.00th=[36439], 80.00th=[40109], 90.00th=[46400], 95.00th=[54264], 00:31:38.712 | 99.00th=[58983], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 00:31:38.712 | 99.99th=[69731] 00:31:38.712 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:31:38.712 slat (usec): min=2, max=24066, avg=153.58, stdev=1265.12 00:31:38.712 clat (usec): min=1481, max=62367, avg=19747.61, stdev=11467.10 00:31:38.712 lat (usec): min=1494, max=62392, avg=19901.18, stdev=11590.62 00:31:38.712 clat percentiles (usec): 00:31:38.712 | 1.00th=[ 5276], 5.00th=[ 7046], 10.00th=[ 8979], 20.00th=[10159], 00:31:38.712 | 30.00th=[10421], 40.00th=[11338], 50.00th=[18220], 60.00th=[20579], 00:31:38.712 | 70.00th=[25560], 80.00th=[29754], 90.00th=[38536], 95.00th=[39584], 00:31:38.712 | 99.00th=[49021], 99.50th=[49021], 99.90th=[56361], 99.95th=[62129], 00:31:38.712 | 99.99th=[62129] 00:31:38.712 bw ( KiB/s): min= 7672, max=16351, per=16.43%, avg=12011.50, stdev=6136.98, samples=2 00:31:38.712 iops : min= 1918, max= 4087, avg=3002.50, stdev=1533.71, samples=2 00:31:38.712 lat (msec) : 2=0.05%, 4=0.11%, 10=15.65%, 20=35.98%, 50=44.20% 00:31:38.712 lat (msec) : 100=4.02% 00:31:38.712 cpu : usr=1.69%, sys=4.56%, ctx=233, majf=0, minf=1 00:31:38.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:38.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.712 issued rwts: total=2623,3072,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:31:38.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.712 job1: (groupid=0, jobs=1): err= 0: pid=3142590: Wed Nov 20 10:04:01 2024 00:31:38.712 read: IOPS=4167, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1009msec) 00:31:38.712 slat (nsec): min=1751, max=22356k, avg=112727.51, stdev=972480.97 00:31:38.712 clat (usec): min=4427, max=53897, avg=15470.33, stdev=4749.71 00:31:38.712 lat (usec): min=4433, max=53915, avg=15583.05, stdev=4830.62 00:31:38.712 clat percentiles (usec): 00:31:38.712 | 1.00th=[ 5800], 5.00th=[ 8586], 10.00th=[11338], 20.00th=[12387], 00:31:38.712 | 30.00th=[13566], 40.00th=[14222], 50.00th=[15008], 60.00th=[15401], 00:31:38.712 | 70.00th=[16057], 80.00th=[17433], 90.00th=[21365], 95.00th=[23987], 00:31:38.712 | 99.00th=[32637], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:31:38.712 | 99.99th=[53740] 00:31:38.712 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:31:38.712 slat (usec): min=2, max=15798, avg=99.84, stdev=808.63 00:31:38.712 clat (usec): min=569, max=30759, avg=13673.99, stdev=4026.81 00:31:38.712 lat (usec): min=578, max=33526, avg=13773.82, stdev=4099.83 00:31:38.712 clat percentiles (usec): 00:31:38.712 | 1.00th=[ 5407], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10421], 00:31:38.712 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12649], 60.00th=[13566], 00:31:38.712 | 70.00th=[14746], 80.00th=[16909], 90.00th=[20579], 95.00th=[20579], 00:31:38.712 | 99.00th=[22938], 99.50th=[26084], 99.90th=[30802], 99.95th=[30802], 00:31:38.712 | 99.99th=[30802] 00:31:38.712 bw ( KiB/s): min=17700, max=18976, per=25.08%, avg=18338.00, stdev=902.27, samples=2 00:31:38.712 iops : min= 4425, max= 4744, avg=4584.50, stdev=225.57, samples=2 00:31:38.712 lat (usec) : 750=0.02% 00:31:38.712 lat (msec) : 2=0.06%, 10=10.51%, 20=76.75%, 50=12.65%, 100=0.01% 00:31:38.712 cpu : usr=4.17%, sys=5.36%, ctx=225, majf=0, minf=1 00:31:38.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.4%, >=64=99.3% 00:31:38.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.712 issued rwts: total=4205,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.712 job2: (groupid=0, jobs=1): err= 0: pid=3142593: Wed Nov 20 10:04:01 2024 00:31:38.712 read: IOPS=5229, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1003msec) 00:31:38.712 slat (nsec): min=1374, max=10442k, avg=88196.40, stdev=674829.97 00:31:38.712 clat (usec): min=1212, max=25821, avg=12064.88, stdev=3180.50 00:31:38.712 lat (usec): min=4000, max=25828, avg=12153.07, stdev=3221.15 00:31:38.712 clat percentiles (usec): 00:31:38.712 | 1.00th=[ 6980], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9896], 00:31:38.712 | 30.00th=[10421], 40.00th=[10552], 50.00th=[11469], 60.00th=[12387], 00:31:38.712 | 70.00th=[13173], 80.00th=[14222], 90.00th=[16188], 95.00th=[18482], 00:31:38.712 | 99.00th=[22938], 99.50th=[24249], 99.90th=[25822], 99.95th=[25822], 00:31:38.712 | 99.99th=[25822] 00:31:38.712 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:31:38.712 slat (usec): min=2, max=17062, avg=85.48, stdev=632.23 00:31:38.712 clat (usec): min=483, max=36892, avg=11330.82, stdev=3794.96 00:31:38.712 lat (usec): min=539, max=36957, avg=11416.29, stdev=3831.32 00:31:38.712 clat percentiles (usec): 00:31:38.712 | 1.00th=[ 3228], 5.00th=[ 5932], 10.00th=[ 7373], 20.00th=[ 8717], 00:31:38.712 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11338], 60.00th=[11600], 00:31:38.712 | 70.00th=[11731], 80.00th=[13566], 90.00th=[15926], 95.00th=[18744], 00:31:38.712 | 99.00th=[22414], 99.50th=[23725], 99.90th=[29754], 99.95th=[29754], 00:31:38.712 | 99.99th=[36963] 00:31:38.712 bw ( KiB/s): min=21884, max=23112, per=30.77%, avg=22498.00, stdev=868.33, samples=2 00:31:38.712 iops : min= 5471, max= 5778, avg=5624.50, stdev=217.08, 
samples=2 00:31:38.712 lat (usec) : 500=0.01%, 750=0.05% 00:31:38.712 lat (msec) : 2=0.09%, 4=0.90%, 10=29.04%, 20=66.88%, 50=3.02% 00:31:38.712 cpu : usr=3.69%, sys=6.19%, ctx=449, majf=0, minf=1 00:31:38.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:38.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.712 issued rwts: total=5245,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.712 job3: (groupid=0, jobs=1): err= 0: pid=3142594: Wed Nov 20 10:04:01 2024 00:31:38.712 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:31:38.712 slat (nsec): min=1513, max=14347k, avg=98274.17, stdev=705715.76 00:31:38.712 clat (usec): min=2790, max=30790, avg=12642.49, stdev=3208.11 00:31:38.712 lat (usec): min=2797, max=30804, avg=12740.76, stdev=3257.30 00:31:38.712 clat percentiles (usec): 00:31:38.712 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290], 00:31:38.712 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12649], 00:31:38.712 | 70.00th=[13304], 80.00th=[13960], 90.00th=[16450], 95.00th=[20055], 00:31:38.712 | 99.00th=[25035], 99.50th=[25822], 99.90th=[26608], 99.95th=[26870], 00:31:38.712 | 99.99th=[30802] 00:31:38.712 write: IOPS=5116, BW=20.0MiB/s (21.0MB/s)(20.0MiB/1003msec); 0 zone resets 00:31:38.712 slat (usec): min=2, max=10854, avg=90.66, stdev=551.60 00:31:38.712 clat (usec): min=1544, max=29912, avg=12179.70, stdev=2923.61 00:31:38.712 lat (usec): min=1561, max=34361, avg=12270.36, stdev=2965.58 00:31:38.712 clat percentiles (usec): 00:31:38.712 | 1.00th=[ 3949], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[11207], 00:31:38.712 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:31:38.712 | 70.00th=[12125], 80.00th=[12780], 90.00th=[14615], 95.00th=[17957], 00:31:38.712 | 
99.00th=[24511], 99.50th=[24511], 99.90th=[26608], 99.95th=[26608], 00:31:38.712 | 99.99th=[30016] 00:31:38.712 bw ( KiB/s): min=20439, max=20480, per=27.98%, avg=20459.50, stdev=28.99, samples=2 00:31:38.712 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:31:38.712 lat (msec) : 2=0.17%, 4=0.50%, 10=11.39%, 20=84.10%, 50=3.84% 00:31:38.712 cpu : usr=5.29%, sys=5.09%, ctx=449, majf=0, minf=1 00:31:38.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:38.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.712 issued rwts: total=5120,5132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.712 00:31:38.712 Run status group 0 (all jobs): 00:31:38.712 READ: bw=66.6MiB/s (69.8MB/s), 10.2MiB/s-20.4MiB/s (10.6MB/s-21.4MB/s), io=67.2MiB (70.4MB), run=1003-1009msec 00:31:38.712 WRITE: bw=71.4MiB/s (74.9MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=72.0MiB (75.5MB), run=1003-1009msec 00:31:38.712 00:31:38.712 Disk stats (read/write): 00:31:38.712 nvme0n1: ios=2380/2560, merge=0/0, ticks=37434/31192, in_queue=68626, util=99.80% 00:31:38.712 nvme0n2: ios=3633/3932, merge=0/0, ticks=51045/46547, in_queue=97592, util=87.92% 00:31:38.712 nvme0n3: ios=4665/4630, merge=0/0, ticks=50361/47426, in_queue=97787, util=91.36% 00:31:38.712 nvme0n4: ios=4153/4514, merge=0/0, ticks=34411/33969, in_queue=68380, util=95.28% 00:31:38.712 10:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:38.712 [global] 00:31:38.712 thread=1 00:31:38.712 invalidate=1 00:31:38.712 rw=randwrite 00:31:38.712 time_based=1 00:31:38.712 runtime=1 00:31:38.712 ioengine=libaio 00:31:38.712 direct=1 00:31:38.712 bs=4096 
00:31:38.712 iodepth=128 00:31:38.712 norandommap=0 00:31:38.712 numjobs=1 00:31:38.712 00:31:38.712 verify_dump=1 00:31:38.712 verify_backlog=512 00:31:38.712 verify_state_save=0 00:31:38.712 do_verify=1 00:31:38.712 verify=crc32c-intel 00:31:38.712 [job0] 00:31:38.712 filename=/dev/nvme0n1 00:31:38.712 [job1] 00:31:38.712 filename=/dev/nvme0n2 00:31:38.712 [job2] 00:31:38.712 filename=/dev/nvme0n3 00:31:38.712 [job3] 00:31:38.712 filename=/dev/nvme0n4 00:31:38.712 Could not set queue depth (nvme0n1) 00:31:38.713 Could not set queue depth (nvme0n2) 00:31:38.713 Could not set queue depth (nvme0n3) 00:31:38.713 Could not set queue depth (nvme0n4) 00:31:38.969 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.969 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.969 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.969 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.969 fio-3.35 00:31:38.969 Starting 4 threads 00:31:40.364 00:31:40.364 job0: (groupid=0, jobs=1): err= 0: pid=3142961: Wed Nov 20 10:04:03 2024 00:31:40.364 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:31:40.364 slat (nsec): min=1505, max=23071k, avg=175357.93, stdev=1280313.69 00:31:40.365 clat (usec): min=5619, max=98373, avg=20039.06, stdev=14828.01 00:31:40.365 lat (usec): min=5631, max=98383, avg=20214.42, stdev=14952.80 00:31:40.365 clat percentiles (usec): 00:31:40.365 | 1.00th=[ 7832], 5.00th=[10028], 10.00th=[10683], 20.00th=[11207], 00:31:40.365 | 30.00th=[11469], 40.00th=[11600], 50.00th=[13435], 60.00th=[18220], 00:31:40.365 | 70.00th=[21627], 80.00th=[26346], 90.00th=[32113], 95.00th=[44827], 00:31:40.365 | 99.00th=[92799], 99.50th=[95945], 99.90th=[98042], 99.95th=[98042], 00:31:40.365 | 
99.99th=[98042] 00:31:40.365 write: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1014msec); 0 zone resets 00:31:40.365 slat (usec): min=2, max=19341, avg=123.71, stdev=880.07 00:31:40.365 clat (usec): min=3026, max=98339, avg=18841.79, stdev=10491.43 00:31:40.365 lat (usec): min=3036, max=98343, avg=18965.50, stdev=10530.75 00:31:40.365 clat percentiles (usec): 00:31:40.365 | 1.00th=[ 5407], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290], 00:31:40.365 | 30.00th=[11469], 40.00th=[15795], 50.00th=[18744], 60.00th=[20579], 00:31:40.365 | 70.00th=[21890], 80.00th=[22152], 90.00th=[28181], 95.00th=[32637], 00:31:40.365 | 99.00th=[64226], 99.50th=[68682], 99.90th=[77071], 99.95th=[98042], 00:31:40.365 | 99.99th=[98042] 00:31:40.365 bw ( KiB/s): min=11720, max=15536, per=18.93%, avg=13628.00, stdev=2698.32, samples=2 00:31:40.365 iops : min= 2930, max= 3884, avg=3407.00, stdev=674.58, samples=2 00:31:40.365 lat (msec) : 4=0.08%, 10=10.54%, 20=47.93%, 50=37.71%, 100=3.75% 00:31:40.365 cpu : usr=3.75%, sys=3.06%, ctx=269, majf=0, minf=1 00:31:40.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:31:40.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.365 issued rwts: total=3072,3534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.365 job1: (groupid=0, jobs=1): err= 0: pid=3142962: Wed Nov 20 10:04:03 2024 00:31:40.365 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:31:40.365 slat (nsec): min=1381, max=3526.6k, avg=82386.13, stdev=433606.24 00:31:40.365 clat (usec): min=7332, max=13946, avg=10581.62, stdev=982.53 00:31:40.365 lat (usec): min=7436, max=13953, avg=10664.01, stdev=1022.72 00:31:40.365 clat percentiles (usec): 00:31:40.365 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[ 9896], 00:31:40.365 | 30.00th=[10159], 
40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:31:40.365 | 70.00th=[10814], 80.00th=[11338], 90.00th=[11863], 95.00th=[12518], 00:31:40.365 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13698], 99.95th=[13960], 00:31:40.365 | 99.99th=[13960] 00:31:40.365 write: IOPS=5992, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1003msec); 0 zone resets 00:31:40.365 slat (usec): min=2, max=19213, avg=82.66, stdev=517.89 00:31:40.365 clat (usec): min=2830, max=44357, avg=11134.26, stdev=3210.71 00:31:40.365 lat (usec): min=3511, max=44400, avg=11216.92, stdev=3248.01 00:31:40.365 clat percentiles (usec): 00:31:40.365 | 1.00th=[ 6849], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:31:40.365 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10683], 00:31:40.365 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11207], 95.00th=[13435], 00:31:40.365 | 99.00th=[31589], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:31:40.365 | 99.99th=[44303] 00:31:40.365 bw ( KiB/s): min=23168, max=23896, per=32.69%, avg=23532.00, stdev=514.77, samples=2 00:31:40.365 iops : min= 5792, max= 5974, avg=5883.00, stdev=128.69, samples=2 00:31:40.365 lat (msec) : 4=0.08%, 10=16.78%, 20=81.23%, 50=1.92% 00:31:40.365 cpu : usr=3.89%, sys=8.48%, ctx=521, majf=0, minf=1 00:31:40.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:40.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.365 issued rwts: total=5632,6010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.365 job2: (groupid=0, jobs=1): err= 0: pid=3142969: Wed Nov 20 10:04:03 2024 00:31:40.365 read: IOPS=5466, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1006msec) 00:31:40.365 slat (nsec): min=1458, max=11191k, avg=92402.79, stdev=766670.10 00:31:40.365 clat (usec): min=1994, max=27732, avg=12033.35, stdev=2907.74 00:31:40.365 lat 
(usec): min=4294, max=27740, avg=12125.75, stdev=2972.56 00:31:40.365 clat percentiles (usec): 00:31:40.365 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10159], 00:31:40.365 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11731], 00:31:40.365 | 70.00th=[12387], 80.00th=[13698], 90.00th=[16188], 95.00th=[18482], 00:31:40.365 | 99.00th=[20841], 99.50th=[22414], 99.90th=[27657], 99.95th=[27657], 00:31:40.365 | 99.99th=[27657] 00:31:40.365 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:31:40.365 slat (usec): min=2, max=10551, avg=80.82, stdev=652.64 00:31:40.365 clat (usec): min=1551, max=22040, avg=10832.29, stdev=2524.27 00:31:40.365 lat (usec): min=1566, max=22062, avg=10913.11, stdev=2571.37 00:31:40.365 clat percentiles (usec): 00:31:40.365 | 1.00th=[ 5342], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 8979], 00:31:40.365 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:31:40.365 | 70.00th=[11600], 80.00th=[11994], 90.00th=[15008], 95.00th=[15401], 00:31:40.365 | 99.00th=[16909], 99.50th=[17433], 99.90th=[20579], 99.95th=[21890], 00:31:40.365 | 99.99th=[22152] 00:31:40.365 bw ( KiB/s): min=21984, max=23072, per=31.30%, avg=22528.00, stdev=769.33, samples=2 00:31:40.365 iops : min= 5496, max= 5768, avg=5632.00, stdev=192.33, samples=2 00:31:40.365 lat (msec) : 2=0.04%, 4=0.19%, 10=23.95%, 20=74.68%, 50=1.14% 00:31:40.365 cpu : usr=4.78%, sys=7.46%, ctx=301, majf=0, minf=2 00:31:40.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:40.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.365 issued rwts: total=5499,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.365 job3: (groupid=0, jobs=1): err= 0: pid=3142973: Wed Nov 20 10:04:03 2024 00:31:40.365 read: IOPS=2554, 
BW=9.98MiB/s (10.5MB/s)(10.1MiB/1014msec) 00:31:40.365 slat (nsec): min=1422, max=19500k, avg=160859.66, stdev=1245610.78 00:31:40.365 clat (usec): min=4192, max=39490, avg=18792.99, stdev=5541.84 00:31:40.365 lat (usec): min=4203, max=45216, avg=18953.85, stdev=5675.49 00:31:40.365 clat percentiles (usec): 00:31:40.365 | 1.00th=[ 7832], 5.00th=[12256], 10.00th=[12649], 20.00th=[13304], 00:31:40.365 | 30.00th=[13960], 40.00th=[17171], 50.00th=[19268], 60.00th=[20055], 00:31:40.365 | 70.00th=[20841], 80.00th=[22414], 90.00th=[26084], 95.00th=[30278], 00:31:40.365 | 99.00th=[34866], 99.50th=[34866], 99.90th=[39584], 99.95th=[39584], 00:31:40.365 | 99.99th=[39584] 00:31:40.365 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec); 0 zone resets 00:31:40.365 slat (usec): min=2, max=18054, avg=182.75, stdev=1110.06 00:31:40.365 clat (usec): min=2964, max=86039, avg=25860.76, stdev=14866.54 00:31:40.365 lat (usec): min=2976, max=86050, avg=26043.51, stdev=14946.32 00:31:40.365 clat percentiles (usec): 00:31:40.365 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[14746], 20.00th=[17433], 00:31:40.365 | 30.00th=[20055], 40.00th=[21103], 50.00th=[21627], 60.00th=[21890], 00:31:40.365 | 70.00th=[23462], 80.00th=[29230], 90.00th=[49021], 95.00th=[63701], 00:31:40.365 | 99.00th=[80217], 99.50th=[82314], 99.90th=[86508], 99.95th=[86508], 00:31:40.365 | 99.99th=[86508] 00:31:40.365 bw ( KiB/s): min=11600, max=12200, per=16.53%, avg=11900.00, stdev=424.26, samples=2 00:31:40.365 iops : min= 2900, max= 3050, avg=2975.00, stdev=106.07, samples=2 00:31:40.365 lat (msec) : 4=0.23%, 10=2.24%, 20=39.67%, 50=52.61%, 100=5.25% 00:31:40.365 cpu : usr=2.86%, sys=3.75%, ctx=258, majf=0, minf=1 00:31:40.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:31:40.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.365 issued rwts: 
total=2590,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.365 00:31:40.365 Run status group 0 (all jobs): 00:31:40.365 READ: bw=64.7MiB/s (67.8MB/s), 9.98MiB/s-21.9MiB/s (10.5MB/s-23.0MB/s), io=65.6MiB (68.8MB), run=1003-1014msec 00:31:40.365 WRITE: bw=70.3MiB/s (73.7MB/s), 11.8MiB/s-23.4MiB/s (12.4MB/s-24.5MB/s), io=71.3MiB (74.7MB), run=1003-1014msec 00:31:40.365 00:31:40.365 Disk stats (read/write): 00:31:40.365 nvme0n1: ios=2412/2560, merge=0/0, ticks=53019/49622, in_queue=102641, util=99.50% 00:31:40.365 nvme0n2: ios=4640/4959, merge=0/0, ticks=16262/17984, in_queue=34246, util=98.26% 00:31:40.365 nvme0n3: ios=4519/4608, merge=0/0, ticks=53061/48648, in_queue=101709, util=97.56% 00:31:40.365 nvme0n4: ios=2355/2560, merge=0/0, ticks=41611/60712, in_queue=102323, util=97.63% 00:31:40.365 10:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:40.365 10:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3143190 00:31:40.365 10:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:40.365 10:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:40.365 [global] 00:31:40.365 thread=1 00:31:40.365 invalidate=1 00:31:40.365 rw=read 00:31:40.365 time_based=1 00:31:40.365 runtime=10 00:31:40.365 ioengine=libaio 00:31:40.365 direct=1 00:31:40.365 bs=4096 00:31:40.365 iodepth=1 00:31:40.365 norandommap=1 00:31:40.365 numjobs=1 00:31:40.365 00:31:40.365 [job0] 00:31:40.365 filename=/dev/nvme0n1 00:31:40.365 [job1] 00:31:40.365 filename=/dev/nvme0n2 00:31:40.365 [job2] 00:31:40.365 filename=/dev/nvme0n3 00:31:40.365 [job3] 00:31:40.365 filename=/dev/nvme0n4 00:31:40.365 Could not set queue depth (nvme0n1) 00:31:40.365 Could 
not set queue depth (nvme0n2) 00:31:40.365 Could not set queue depth (nvme0n3) 00:31:40.365 Could not set queue depth (nvme0n4) 00:31:40.629 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.629 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.629 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.629 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:40.629 fio-3.35 00:31:40.629 Starting 4 threads 00:31:43.153 10:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:43.410 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=43143168, buflen=4096 00:31:43.410 fio: pid=3143401, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:43.410 10:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:43.668 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46686208, buflen=4096 00:31:43.668 fio: pid=3143394, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:43.668 10:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:43.668 10:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:43.925 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47726592, buflen=4096 00:31:43.925 fio: pid=3143363, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:31:43.925 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:43.925 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:44.183 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1744896, buflen=4096 00:31:44.183 fio: pid=3143377, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:31:44.183 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:44.183 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:44.183 00:31:44.183 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3143363: Wed Nov 20 10:04:07 2024 00:31:44.183 read: IOPS=3674, BW=14.4MiB/s (15.1MB/s)(45.5MiB/3171msec) 00:31:44.183 slat (usec): min=6, max=27430, avg=11.19, stdev=266.55 00:31:44.183 clat (usec): min=183, max=1667, avg=256.37, stdev=30.01 00:31:44.183 lat (usec): min=194, max=27698, avg=267.56, stdev=268.85 00:31:44.183 clat percentiles (usec): 00:31:44.183 | 1.00th=[ 198], 5.00th=[ 217], 10.00th=[ 235], 20.00th=[ 241], 00:31:44.183 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 258], 00:31:44.183 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:31:44.183 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 469], 99.95th=[ 494], 00:31:44.183 | 99.99th=[ 1401] 00:31:44.183 bw ( KiB/s): min=13864, max=15520, per=36.78%, avg=14818.17, stdev=757.45, samples=6 00:31:44.183 iops : min= 3466, max= 3880, avg=3704.50, stdev=189.35, samples=6 
00:31:44.183 lat (usec) : 250=45.61%, 500=54.34%, 750=0.03% 00:31:44.183 lat (msec) : 2=0.02% 00:31:44.183 cpu : usr=2.05%, sys=5.99%, ctx=11656, majf=0, minf=1 00:31:44.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.183 issued rwts: total=11653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:44.183 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3143377: Wed Nov 20 10:04:07 2024 00:31:44.183 read: IOPS=126, BW=505KiB/s (517kB/s)(1704KiB/3377msec) 00:31:44.183 slat (usec): min=6, max=12906, avg=87.01, stdev=944.30 00:31:44.183 clat (usec): min=192, max=42037, avg=7837.75, stdev=15886.24 00:31:44.183 lat (usec): min=200, max=54163, avg=7908.07, stdev=16037.23 00:31:44.183 clat percentiles (usec): 00:31:44.183 | 1.00th=[ 210], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 239], 00:31:44.183 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 273], 00:31:44.183 | 70.00th=[ 289], 80.00th=[ 469], 90.00th=[41157], 95.00th=[41157], 00:31:44.183 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:31:44.183 | 99.99th=[42206] 00:31:44.183 bw ( KiB/s): min= 224, max= 1672, per=1.36%, avg=546.17, stdev=553.99, samples=6 00:31:44.183 iops : min= 56, max= 418, avg=136.50, stdev=138.52, samples=6 00:31:44.183 lat (usec) : 250=35.60%, 500=45.43%, 750=0.23% 00:31:44.183 lat (msec) : 50=18.50% 00:31:44.183 cpu : usr=0.00%, sys=0.36%, ctx=430, majf=0, minf=2 00:31:44.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.183 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.183 issued rwts: 
total=427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:44.183 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3143394: Wed Nov 20 10:04:07 2024 00:31:44.183 read: IOPS=3870, BW=15.1MiB/s (15.9MB/s)(44.5MiB/2945msec) 00:31:44.183 slat (nsec): min=7870, max=36581, avg=9089.15, stdev=1025.68 00:31:44.183 clat (usec): min=209, max=570, avg=245.79, stdev=13.68 00:31:44.183 lat (usec): min=218, max=606, avg=254.88, stdev=13.78 00:31:44.183 clat percentiles (usec): 00:31:44.183 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:31:44.183 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:31:44.183 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 262], 00:31:44.183 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 424], 99.95th=[ 461], 00:31:44.183 | 99.99th=[ 502] 00:31:44.183 bw ( KiB/s): min=15496, max=15736, per=38.81%, avg=15632.00, stdev=100.88, samples=5 00:31:44.183 iops : min= 3874, max= 3934, avg=3908.00, stdev=25.22, samples=5 00:31:44.183 lat (usec) : 250=73.10%, 500=26.88%, 750=0.01% 00:31:44.183 cpu : usr=1.12%, sys=4.52%, ctx=11399, majf=0, minf=1 00:31:44.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.183 issued rwts: total=11399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:44.183 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3143401: Wed Nov 20 10:04:07 2024 00:31:44.183 read: IOPS=3860, BW=15.1MiB/s (15.8MB/s)(41.1MiB/2729msec) 00:31:44.183 slat (nsec): min=7099, max=45410, avg=8436.74, stdev=1333.36 00:31:44.183 clat (usec): min=214, max=488, avg=246.69, 
stdev=13.14 00:31:44.183 lat (usec): min=222, max=497, avg=255.12, stdev=13.23 00:31:44.183 clat percentiles (usec): 00:31:44.183 | 1.00th=[ 227], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:31:44.183 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:31:44.183 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:31:44.183 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 351], 99.95th=[ 379], 00:31:44.183 | 99.99th=[ 482] 00:31:44.183 bw ( KiB/s): min=15304, max=15760, per=38.70%, avg=15588.80, stdev=184.40, samples=5 00:31:44.183 iops : min= 3826, max= 3940, avg=3897.20, stdev=46.10, samples=5 00:31:44.183 lat (usec) : 250=67.81%, 500=32.18% 00:31:44.183 cpu : usr=2.02%, sys=6.56%, ctx=10534, majf=0, minf=2 00:31:44.183 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.183 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.183 issued rwts: total=10534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.183 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:44.183 00:31:44.183 Run status group 0 (all jobs): 00:31:44.183 READ: bw=39.3MiB/s (41.2MB/s), 505KiB/s-15.1MiB/s (517kB/s-15.9MB/s), io=133MiB (139MB), run=2729-3377msec 00:31:44.183 00:31:44.183 Disk stats (read/write): 00:31:44.183 nvme0n1: ios=11489/0, merge=0/0, ticks=2801/0, in_queue=2801, util=94.61% 00:31:44.183 nvme0n2: ios=425/0, merge=0/0, ticks=3298/0, in_queue=3298, util=95.66% 00:31:44.184 nvme0n3: ios=11127/0, merge=0/0, ticks=2672/0, in_queue=2672, util=96.52% 00:31:44.184 nvme0n4: ios=10133/0, merge=0/0, ticks=2357/0, in_queue=2357, util=96.48% 00:31:44.184 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:44.184 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:44.441 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:44.441 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:44.698 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:44.698 10:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:45.011 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:45.011 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:45.011 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:45.011 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3143190 00:31:45.011 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:45.011 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:45.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:45.284 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:45.284 10:04:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:45.284 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:45.284 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:45.284 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:45.284 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:45.284 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:45.284 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:45.284 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:45.284 nvmf hotplug test: fio failed as expected 00:31:45.284 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:45.591 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:45.591 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:45.591 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:45.591 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:45.591 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@91 -- # nvmftestfini 00:31:45.591 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:45.591 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:45.591 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.592 rmmod nvme_tcp 00:31:45.592 rmmod nvme_fabrics 00:31:45.592 rmmod nvme_keyring 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3140719 ']' 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3140719 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3140719 ']' 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3140719 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3140719 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3140719' 00:31:45.592 killing process with pid 3140719 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3140719 00:31:45.592 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3140719 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.870 10:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.775 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.775 00:31:47.775 real 0m25.886s 00:31:47.775 user 1m31.402s 00:31:47.775 sys 0m11.761s 00:31:47.775 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.775 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:47.775 ************************************ 00:31:47.775 END TEST nvmf_fio_target 00:31:47.775 ************************************ 00:31:47.775 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:47.775 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:47.775 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.775 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:47.775 ************************************ 00:31:47.775 START TEST nvmf_bdevio 00:31:47.775 ************************************ 00:31:47.776 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp --interrupt-mode 00:31:48.036 * Looking for test storage... 00:31:48.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1703 -- # lcov --version 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- scripts/common.sh@368 -- # return 0 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:31:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.036 --rc genhtml_branch_coverage=1 00:31:48.036 --rc genhtml_function_coverage=1 00:31:48.036 --rc genhtml_legend=1 00:31:48.036 --rc geninfo_all_blocks=1 00:31:48.036 --rc geninfo_unexecuted_blocks=1 00:31:48.036 00:31:48.036 ' 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:31:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.036 --rc genhtml_branch_coverage=1 00:31:48.036 --rc genhtml_function_coverage=1 00:31:48.036 --rc genhtml_legend=1 00:31:48.036 --rc geninfo_all_blocks=1 00:31:48.036 --rc geninfo_unexecuted_blocks=1 00:31:48.036 00:31:48.036 ' 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:31:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.036 --rc genhtml_branch_coverage=1 00:31:48.036 --rc genhtml_function_coverage=1 00:31:48.036 --rc genhtml_legend=1 00:31:48.036 --rc geninfo_all_blocks=1 00:31:48.036 --rc geninfo_unexecuted_blocks=1 00:31:48.036 00:31:48.036 ' 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:31:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.036 --rc genhtml_branch_coverage=1 00:31:48.036 --rc genhtml_function_coverage=1 00:31:48.036 --rc genhtml_legend=1 00:31:48.036 --rc geninfo_all_blocks=1 00:31:48.036 --rc geninfo_unexecuted_blocks=1 00:31:48.036 00:31:48.036 ' 00:31:48.036 10:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.036 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:48.037 10:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:48.037 10:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.605 10:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:54.605 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:54.605 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.605 10:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:54.605 Found net devices under 0000:86:00.0: cvl_0_0 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:54.605 Found net devices under 0000:86:00.1: cvl_0_1 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.605 10:04:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.605 10:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.605 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:54.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:31:54.606 00:31:54.606 --- 10.0.0.2 ping statistics --- 00:31:54.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.606 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:54.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:31:54.606 00:31:54.606 --- 10.0.0.1 ping statistics --- 00:31:54.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.606 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
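The trace above (nvmf/common.sh@250-291) moves the target-side port `cvl_0_0` into its own network namespace and verifies connectivity both ways before the target starts. A dry-run sketch of that sequence, with device names and addresses taken from this log; the `run()` wrapper only prints the commands, since the real ones need root and the test hardware:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace split nvmf_tcp_init performs above.
# run() only echoes; in the real test these are executed as root.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                               # target namespace name from the log
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                           # initiator -> target check
run ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator check
```

Keeping the two ports in separate namespaces is what lets a single host act as both NVMe/TCP target and initiator without the kernel short-circuiting the traffic.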
nvmf/common.sh@509 -- # nvmfpid=3147713 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3147713 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3147713 ']' 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 [2024-11-20 10:04:17.221764] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:54.606 [2024-11-20 10:04:17.222728] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:31:54.606 [2024-11-20 10:04:17.222763] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.606 [2024-11-20 10:04:17.302097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:54.606 [2024-11-20 10:04:17.345713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.606 [2024-11-20 10:04:17.345749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.606 [2024-11-20 10:04:17.345757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.606 [2024-11-20 10:04:17.345764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.606 [2024-11-20 10:04:17.345770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.606 [2024-11-20 10:04:17.347411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:54.606 [2024-11-20 10:04:17.347497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:54.606 [2024-11-20 10:04:17.347583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.606 [2024-11-20 10:04:17.347584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:54.606 [2024-11-20 10:04:17.413910] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:54.606 [2024-11-20 10:04:17.414720] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:54.606 [2024-11-20 10:04:17.414972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:54.606 [2024-11-20 10:04:17.415404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:54.606 [2024-11-20 10:04:17.415441] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 [2024-11-20 10:04:17.492334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 Malloc0 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:54.606 [2024-11-20 10:04:17.572496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
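The `rpc_cmd` calls traced above (bdevio.sh@18-22) build the target configuration: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420. A dry-run sketch of that sequence; `rpc()` here only prints, whereas the real `rpc_cmd` forwards to SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock` (socket path assumed from the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the bdevio.sh RPC sequence shown in the trace above.
# rpc() only echoes the call it would make.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192                                   # bdevio.sh@18
rpc bdev_malloc_create 64 512 -b Malloc0                                      # bdevio.sh@19
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 # bdevio.sh@20
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                  # bdevio.sh@21
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # bdevio.sh@22
```

The order matters: the transport must exist before a listener can be added, and the bdev must exist before it can be attached as a namespace.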
00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:54.606 { 00:31:54.606 "params": { 00:31:54.606 "name": "Nvme$subsystem", 00:31:54.606 "trtype": "$TEST_TRANSPORT", 00:31:54.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:54.606 "adrfam": "ipv4", 00:31:54.606 "trsvcid": "$NVMF_PORT", 00:31:54.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:54.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:54.606 "hdgst": ${hdgst:-false}, 00:31:54.606 "ddgst": ${ddgst:-false} 00:31:54.606 }, 00:31:54.606 "method": "bdev_nvme_attach_controller" 00:31:54.606 } 00:31:54.606 EOF 00:31:54.606 )") 00:31:54.606 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:54.607 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:54.607 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:54.607 10:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:54.607 "params": { 00:31:54.607 "name": "Nvme1", 00:31:54.607 "trtype": "tcp", 00:31:54.607 "traddr": "10.0.0.2", 00:31:54.607 "adrfam": "ipv4", 00:31:54.607 "trsvcid": "4420", 00:31:54.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:54.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:54.607 "hdgst": false, 00:31:54.607 "ddgst": false 00:31:54.607 }, 00:31:54.607 "method": "bdev_nvme_attach_controller" 00:31:54.607 }' 00:31:54.607 [2024-11-20 10:04:17.621602] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:31:54.607 [2024-11-20 10:04:17.621647] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147824 ] 00:31:54.607 [2024-11-20 10:04:17.696631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:54.607 [2024-11-20 10:04:17.740842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.607 [2024-11-20 10:04:17.740954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.607 [2024-11-20 10:04:17.740962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.607 I/O targets: 00:31:54.607 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:54.607 00:31:54.607 00:31:54.607 CUnit - A unit testing framework for C - Version 2.1-3 00:31:54.607 http://cunit.sourceforge.net/ 00:31:54.607 00:31:54.607 00:31:54.607 Suite: bdevio tests on: Nvme1n1 00:31:54.864 Test: blockdev write read block ...passed 00:31:54.864 Test: blockdev write zeroes read block ...passed 00:31:54.864 Test: blockdev write zeroes read no split ...passed 00:31:54.864 Test: blockdev 
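The `gen_nvmf_target_json` trace above (nvmf/common.sh@560-586) shows the pattern: one here-document fragment per subsystem, expanded with the test's connection variables, then merged and fed to `bdevio --json /dev/fd/62`. A minimal sketch of that fragment-building step, with values taken from this log; the `jq`-based merge and the digest flags are omitted:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json fragment loop traced above.
TEST_TRANSPORT=tcp              # values as they appear in this log
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  # Unquoted EOF so the shell expands $subsystem and the connection vars.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
printf '%s\n' "${config[@]}"
```

This is why the printed configuration in the log shows `Nvme1` and `cnode1`: the loop ran once with `subsystem=1`.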
write zeroes read split ...passed 00:31:54.864 Test: blockdev write zeroes read split partial ...passed 00:31:54.864 Test: blockdev reset ...[2024-11-20 10:04:18.081495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:54.864 [2024-11-20 10:04:18.081560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e2340 (9): Bad file descriptor 00:31:54.864 [2024-11-20 10:04:18.092586] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:54.864 passed 00:31:54.864 Test: blockdev write read 8 blocks ...passed 00:31:54.864 Test: blockdev write read size > 128k ...passed 00:31:54.864 Test: blockdev write read invalid size ...passed 00:31:54.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:54.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:54.864 Test: blockdev write read max offset ...passed 00:31:55.123 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:55.123 Test: blockdev writev readv 8 blocks ...passed 00:31:55.123 Test: blockdev writev readv 30 x 1block ...passed 00:31:55.123 Test: blockdev writev readv block ...passed 00:31:55.123 Test: blockdev writev readv size > 128k ...passed 00:31:55.123 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:55.123 Test: blockdev comparev and writev ...[2024-11-20 10:04:18.303824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.123 [2024-11-20 10:04:18.303852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.303866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.123 
[2024-11-20 10:04:18.303874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.304177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.123 [2024-11-20 10:04:18.304193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.304205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.123 [2024-11-20 10:04:18.304212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.304500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.123 [2024-11-20 10:04:18.304512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.304524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.123 [2024-11-20 10:04:18.304532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.304819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.123 [2024-11-20 10:04:18.304831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.304843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:55.123 [2024-11-20 10:04:18.304851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:55.123 passed 00:31:55.123 Test: blockdev nvme passthru rw ...passed 00:31:55.123 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:04:18.387299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.123 [2024-11-20 10:04:18.387319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.387427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.123 [2024-11-20 10:04:18.387438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.387555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.123 [2024-11-20 10:04:18.387565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:55.123 [2024-11-20 10:04:18.387676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:55.123 [2024-11-20 10:04:18.387689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:55.123 passed 00:31:55.123 Test: blockdev nvme admin passthru ...passed 00:31:55.123 Test: blockdev copy ...passed 00:31:55.123 00:31:55.123 Run Summary: Type Total Ran Passed Failed Inactive 00:31:55.123 suites 1 1 n/a 0 0 00:31:55.123 tests 23 23 23 0 0 00:31:55.123 asserts 152 152 152 0 n/a 00:31:55.123 00:31:55.123 Elapsed time = 1.029 
seconds 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.382 rmmod nvme_tcp 00:31:55.382 rmmod nvme_fabrics 00:31:55.382 rmmod nvme_keyring 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3147713 ']' 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3147713 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3147713 ']' 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3147713 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3147713 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3147713' 00:31:55.382 killing process with pid 3147713 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3147713 00:31:55.382 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3147713 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.641 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.642 10:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.179 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.179 00:31:58.179 real 0m9.883s 00:31:58.179 user 0m8.470s 00:31:58.179 sys 0m5.120s 00:31:58.179 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:58.179 10:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:58.179 ************************************ 00:31:58.179 END TEST nvmf_bdevio 00:31:58.179 ************************************ 00:31:58.179 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:58.179 00:31:58.179 real 4m34.190s 00:31:58.179 user 9m9.291s 00:31:58.179 sys 1m52.372s 00:31:58.179 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
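The `iptr` step above restores iptables to its pre-test state by replaying `iptables-save` output minus any rule carrying the `SPDK_NVMF` comment that the harness tags its rules with. Running the real pipeline needs root, so this sketch exercises only the filtering step, on a fabricated ruleset:

```shell
# Root-free sketch of the iptr filter: drop every saved rule tagged with
# the SPDK_NVMF comment, keep the rest. The ruleset below is fabricated
# for illustration; the tagged rule mirrors the one added later in this log.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -p icmp -j ACCEPT'

kept_rules=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept_rules"   # this is what iptables-restore would replay
```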
-- # xtrace_disable 00:31:58.179 10:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:58.179 ************************************ 00:31:58.179 END TEST nvmf_target_core_interrupt_mode 00:31:58.179 ************************************ 00:31:58.179 10:04:21 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:58.179 10:04:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:58.179 10:04:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:58.179 10:04:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:58.179 ************************************ 00:31:58.179 START TEST nvmf_interrupt 00:31:58.179 ************************************ 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:58.179 * Looking for test storage... 
00:31:58.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1703 -- # lcov --version 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:31:58.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.179 --rc genhtml_branch_coverage=1 00:31:58.179 --rc genhtml_function_coverage=1 00:31:58.179 --rc genhtml_legend=1 00:31:58.179 --rc geninfo_all_blocks=1 00:31:58.179 --rc geninfo_unexecuted_blocks=1 00:31:58.179 00:31:58.179 ' 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:31:58.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.179 --rc genhtml_branch_coverage=1 00:31:58.179 --rc 
genhtml_function_coverage=1 00:31:58.179 --rc genhtml_legend=1 00:31:58.179 --rc geninfo_all_blocks=1 00:31:58.179 --rc geninfo_unexecuted_blocks=1 00:31:58.179 00:31:58.179 ' 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:31:58.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.179 --rc genhtml_branch_coverage=1 00:31:58.179 --rc genhtml_function_coverage=1 00:31:58.179 --rc genhtml_legend=1 00:31:58.179 --rc geninfo_all_blocks=1 00:31:58.179 --rc geninfo_unexecuted_blocks=1 00:31:58.179 00:31:58.179 ' 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:31:58.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.179 --rc genhtml_branch_coverage=1 00:31:58.179 --rc genhtml_function_coverage=1 00:31:58.179 --rc genhtml_legend=1 00:31:58.179 --rc geninfo_all_blocks=1 00:31:58.179 --rc geninfo_unexecuted_blocks=1 00:31:58.179 00:31:58.179 ' 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.179 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.179 
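The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`, `-` and `:` (`IFS=.-:`), pads the shorter field list with zeros, and compares field by field. A self-contained sketch of that comparison (the function name is mine; the real helper lives in scripts/common.sh):

```shell
# Sketch of the cmp_versions "<" path: numeric field-by-field comparison.
version_lt() {
    local IFS=.-:                      # same separators the harness uses
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                           # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # the exact comparison from the log
```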
10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.180 
10:04:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.180 10:04:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:58.180 
10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:58.180 10:04:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:04.749 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.750 10:04:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:04.750 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:04.750 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:04.750 10:04:26 
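The device scan above buckets PCI device IDs into `e810`, `x722` and `mlx` lists and then matches each discovered ID (here `0x159b`, driver `ice`) against them. A minimal lookup sketch using only the Intel IDs that appear in this log (`classify_nic` is a hypothetical helper, not part of the harness):

```shell
# Sketch of the NIC bucketing from nvmf/common.sh: match a PCI device ID
# against whitespace-delimited ID tables. IDs taken from the log above.
e810_ids="0x1592 0x159b"
x722_ids="0x37d2"

classify_nic() {
    local dev_id=$1
    case " $e810_ids " in *" $dev_id "*) echo e810; return;; esac
    case " $x722_ids " in *" $dev_id "*) echo x722; return;; esac
    echo unknown
}

classify_nic 0x159b   # the E810 device found at 0000:86:00.0 above
```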
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:04.750 Found net devices under 0000:86:00.0: cvl_0_0 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:04.750 Found net devices under 0000:86:00.1: cvl_0_1 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.750 10:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.750 10:04:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:04.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:32:04.750 00:32:04.750 --- 10.0.0.2 ping statistics --- 00:32:04.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.750 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:04.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:32:04.750 00:32:04.750 --- 10.0.0.1 ping statistics --- 00:32:04.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.750 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:04.750 10:04:27 
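The connectivity check above passes when `ping` across the `cvl_0_0_ns_spdk` namespace reports 0% loss. Reproducing the namespace setup needs root, so this sketch parses the summary lines exactly as they appear in the log instead of running a live ping:

```shell
# Parse ping's loss and average-rtt summary, replayed on the log's output.
ping_output='1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms'

loss=$(printf '%s\n' "$ping_output" | sed -n 's/.* \([0-9.]*\)% packet loss.*/\1/p')
avg_rtt=$(printf '%s\n' "$ping_output" | awk -F'/' '/^rtt/ {print $5}')
echo "loss=${loss}% avg_rtt=${avg_rtt}ms"
```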
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3151379 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3151379 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3151379 ']' 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.750 10:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:04.751 10:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:04.751 [2024-11-20 10:04:27.267694] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:04.751 [2024-11-20 10:04:27.268645] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:32:04.751 [2024-11-20 10:04:27.268680] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.751 [2024-11-20 10:04:27.347470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:04.751 [2024-11-20 10:04:27.391338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:04.751 [2024-11-20 10:04:27.391375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:04.751 [2024-11-20 10:04:27.391387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:04.751 [2024-11-20 10:04:27.391393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:04.751 [2024-11-20 10:04:27.391398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:04.751 [2024-11-20 10:04:27.392560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.751 [2024-11-20 10:04:27.392563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.751 [2024-11-20 10:04:27.460292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:04.751 [2024-11-20 10:04:27.460857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:04.751 [2024-11-20 10:04:27.461058] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:05.010 5000+0 records in 00:32:05.010 5000+0 records out 00:32:05.010 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0170569 s, 600 MB/s 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.010 AIO0 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.010 10:04:28 
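`setup_bdev_aio` above backs the AIO bdev with a 10 MB zero-filled file (2048-byte blocks × 5000, hence the `10240000 bytes` in the `dd` output). The file itself can be produced and size-checked anywhere; the `rpc_cmd bdev_aio_create` step needs a running SPDK target, so it is only shown as a comment:

```shell
# Recreate the AIO backing file from the log and verify its size.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 2>/dev/null
size=$(stat -c %s "$aiofile")
echo "$size"
# rpc_cmd bdev_aio_create "$aiofile" AIO0 2048   # registers it as bdev AIO0
rm -f "$aiofile"
```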
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.010 [2024-11-20 10:04:28.197329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:05.010 [2024-11-20 10:04:28.237749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3151379 0 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3151379 0 idle 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3151379 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3151379 -w 256 00:32:05.010 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3151379 root 20 0 128.2g 44544 33024 S 0.0 0.0 0:00.27 reactor_0' 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3151379 root 20 0 128.2g 44544 33024 S 0.0 0.0 0:00.27 reactor_0 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:05.269 
10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3151379 1 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3151379 1 idle 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3151379 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3151379 -w 256 00:32:05.269 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3151420 root 20 0 128.2g 44544 33024 S 0.0 0.0 0:00.00 reactor_1' 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3151420 root 20 0 128.2g 
44544 33024 S 0.0 0.0 0:00.00 reactor_1 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3151645 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3151379 0 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3151379 0 busy 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3151379 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3151379 -w 256 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3151379 root 20 0 128.2g 45312 33024 R 12.5 0.0 0:00.29 reactor_0' 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3151379 root 20 0 128.2g 45312 33024 R 12.5 0.0 0:00.29 reactor_0 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=12.5 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=12 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:05.528 10:04:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.903 10:04:29 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3151379 -w 256 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3151379 root 20 0 128.2g 45312 33024 R 99.9 0.0 0:02.65 reactor_0' 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3151379 root 20 0 128.2g 45312 33024 R 99.9 0.0 0:02.65 reactor_0 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3151379 1 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3151379 1 busy 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3151379 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:06.903 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.904 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.904 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.904 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3151379 -w 256 00:32:06.904 10:04:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3151420 root 20 0 128.2g 45312 33024 R 87.5 0.0 0:01.37 reactor_1' 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3151420 root 20 0 128.2g 45312 33024 R 87.5 0.0 0:01.37 reactor_1 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.904 10:04:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3151645 00:32:16.862 Initializing NVMe Controllers 00:32:16.862 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:32:16.862 Controller IO queue size 256, less than required.
00:32:16.862 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:16.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:16.862 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:16.862 Initialization complete. Launching workers.
00:32:16.862 ========================================================
00:32:16.862 Latency(us)
00:32:16.862 Device Information : IOPS MiB/s Average min max
00:32:16.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16161.20 63.13 15848.57 3092.72 31748.05
00:32:16.862 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16302.70 63.68 15707.55 7405.51 28292.92
00:32:16.862 ========================================================
00:32:16.862 Total : 32463.90 126.81 15777.76 3092.72 31748.05
00:32:16.862
00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3151379 0 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3151379 0 idle 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3151379 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3151379 -w 256 00:32:16.862 10:04:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3151379 root 20 0 128.2g 45312 33024 S 0.0 0.0 0:20.26 reactor_0' 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3151379 root 20 0 128.2g 45312 33024 S 0.0 0.0 0:20.26 reactor_0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3151379 1 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3151379 1 idle 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3151379 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3151379 -w 256 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3151420 root 20 0 128.2g 45312 33024 S 0.0 0.0 0:10.00 reactor_1' 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3151420 root 20 0 128.2g 45312 33024 S 0.0 0.0 0:10.00 reactor_1 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:16.862 10:04:39 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:16.862 10:04:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3151379 0 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3151379 0 idle 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3151379 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=0 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3151379 -w 256 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3151379 root 20 0 128.2g 71424 33024 S 0.0 0.0 0:20.51 reactor_0' 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3151379 root 20 0 128.2g 71424 33024 S 0.0 0.0 0:20.51 reactor_0 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3151379 1 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3151379 1 idle 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3151379 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:18.768 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:18.769 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:18.769 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:18.769 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:18.769 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:18.769 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:18.769 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:18.769 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:18.769 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3151379 -w 256 00:32:18.769 10:04:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3151420 root 20 0 128.2g 71424 33024 S 0.0 0.0 0:10.10 reactor_1' 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3151420 root 20 0 128.2g 71424 33024 S 0.0 0.0 0:10.10 reactor_1 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:18.769 
10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:18.769 10:04:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:19.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.029 10:04:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.029 rmmod nvme_tcp 00:32:19.029 rmmod nvme_fabrics 00:32:19.029 rmmod nvme_keyring 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3151379 ']' 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3151379 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3151379 ']' 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3151379 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.029 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3151379 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3151379' 00:32:19.295 killing process with pid 3151379 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3151379 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3151379 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:19.295 10:04:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:19.295 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:19.554 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:19.554 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:19.554 10:04:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.554 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:19.554 10:04:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.460 10:04:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:21.460 00:32:21.460 real 0m23.611s 00:32:21.460 user 0m39.892s 00:32:21.460 sys 0m8.553s 00:32:21.460 10:04:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.460 10:04:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:21.460 ************************************ 00:32:21.460 END TEST nvmf_interrupt 00:32:21.460 ************************************ 00:32:21.460 00:32:21.460 real 27m34.304s 00:32:21.460 user 56m58.884s 00:32:21.460 sys 9m22.180s 00:32:21.460 10:04:44 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:21.460 10:04:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.460 ************************************ 00:32:21.460 END TEST nvmf_tcp 00:32:21.460 ************************************ 00:32:21.460 10:04:44 -- 
spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:21.460 10:04:44 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:21.460 10:04:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:21.460 10:04:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:21.460 10:04:44 -- common/autotest_common.sh@10 -- # set +x 00:32:21.721 ************************************ 00:32:21.721 START TEST spdkcli_nvmf_tcp 00:32:21.721 ************************************ 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:21.721 * Looking for test storage... 00:32:21.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1703 -- # lcov --version 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:21.721 
10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:32:21.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.721 --rc genhtml_branch_coverage=1 00:32:21.721 --rc genhtml_function_coverage=1 00:32:21.721 
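The trace above steps through `cmp_versions` in `scripts/common.sh`: it splits each version string on `IFS=.-:`, then compares the components numerically one position at a time (here `lt 1.15 2` returns 0, i.e. true, because 1 < 2 in the first component). A loose Python analogue — not the script itself, and the zero-padding of the shorter version is my simplification of what the shell loop effectively does:

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    # Split on '.', '-', ':', mirroring the script's IFS=.-: word splitting.
    a = [int(x) for x in re.split(r"[.\-:]", v1)]
    b = [int(x) for x in re.split(r"[.\-:]", v2)]
    # Pad the shorter list with zeros so positional comparison is well-defined.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    # Lexicographic list comparison == component-wise numeric comparison.
    return a < b

# The trace's check: is lcov 1.15 older than 2?
assert version_lt("1.15", "2")
assert not version_lt("2.0", "1.15")
```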
--rc genhtml_legend=1 00:32:21.721 --rc geninfo_all_blocks=1 00:32:21.721 --rc geninfo_unexecuted_blocks=1 00:32:21.721 00:32:21.721 ' 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:32:21.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.721 --rc genhtml_branch_coverage=1 00:32:21.721 --rc genhtml_function_coverage=1 00:32:21.721 --rc genhtml_legend=1 00:32:21.721 --rc geninfo_all_blocks=1 00:32:21.721 --rc geninfo_unexecuted_blocks=1 00:32:21.721 00:32:21.721 ' 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:32:21.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.721 --rc genhtml_branch_coverage=1 00:32:21.721 --rc genhtml_function_coverage=1 00:32:21.721 --rc genhtml_legend=1 00:32:21.721 --rc geninfo_all_blocks=1 00:32:21.721 --rc geninfo_unexecuted_blocks=1 00:32:21.721 00:32:21.721 ' 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:32:21.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.721 --rc genhtml_branch_coverage=1 00:32:21.721 --rc genhtml_function_coverage=1 00:32:21.721 --rc genhtml_legend=1 00:32:21.721 --rc geninfo_all_blocks=1 00:32:21.721 --rc geninfo_unexecuted_blocks=1 00:32:21.721 00:32:21.721 ' 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.721 10:04:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:21.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
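Note how the `paths/export.sh` lines above prepend the same Go/protoc/golangci directories each time the file is sourced, so the final exported `PATH` repeats them four times over. That is harmless (lookup stops at the first hit) but noisy; a sketch of the first-seen deduplication one could apply (`dedup_path` is my own helper name, not part of the SPDK scripts):

```python
def dedup_path(path: str) -> str:
    """Collapse duplicate PATH entries while preserving first-seen order."""
    seen = []
    for entry in path.split(":"):
        if entry and entry not in seen:
            seen.append(entry)
    return ":".join(seen)

p = "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/usr/bin"
assert dedup_path(p) == "/opt/go/1.21.1/bin:/usr/bin"
```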
00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3154460 00:32:21.721 10:04:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3154460 00:32:21.722 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3154460 ']' 00:32:21.722 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.722 10:04:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:21.722 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.722 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.722 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.722 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.980 [2024-11-20 10:04:45.075751] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
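Here `nvmf_tgt` is launched and `waitforlisten 3154460` blocks until the target is up, which is why the log prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". A minimal sketch of that pattern, assuming the readiness signal is simply that the RPC UNIX socket accepts a connection while the pid is still alive (the real shell helper also retries RPC calls, which this omits):

```python
import os
import socket
import time

def wait_for_listen(pid: int, rpc_addr: str = "/var/tmp/spdk.sock",
                    timeout: float = 10.0) -> bool:
    """Poll until `rpc_addr` accepts a connection or `pid` dies / we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)  # signal 0: existence check, raises if pid is gone
        except ProcessLookupError:
            return False
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(rpc_addr)
                return True  # something is listening on the socket
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.1)  # not up yet; retry
    return False
```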
00:32:21.980 [2024-11-20 10:04:45.075799] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154460 ] 00:32:21.980 [2024-11-20 10:04:45.148600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:21.980 [2024-11-20 10:04:45.192537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.980 [2024-11-20 10:04:45.192540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.980 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.980 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:21.980 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:21.980 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:21.980 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.239 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:22.239 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:22.239 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:22.239 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:22.239 10:04:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.239 10:04:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:22.239 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:22.239 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:22.239 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:22.239 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:32:22.239 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:22.239 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:22.239 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:22.239 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:22.239 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:22.239 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:22.239 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:22.239 ' 00:32:24.767 [2024-11-20 10:04:48.037414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.137 [2024-11-20 10:04:49.373921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:28.662 [2024-11-20 10:04:51.853682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:31.189 [2024-11-20 10:04:54.044522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:32.561 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:32.561 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:32.561 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:32.561 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:32.561 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:32.561 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:32.561 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:32.561 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:32.561 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:32.561 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:32.561 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:32.561 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:32.561 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:32.561 10:04:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:32.561 10:04:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.561 10:04:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.561 10:04:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:32.561 10:04:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.561 10:04:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.561 10:04:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:32.561 10:04:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:33.124 10:04:56 spdkcli_nvmf_tcp -- 
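Each "Executing command:" line above is `spdkcli_job.py` echoing its input as a Python list literal — the spdkcli command, an expected-output substring, and a flag (most entries carry three fields; the final `/nvmf/referral` entry carries only two). A hedged parser for these log lines; the field names `expected` and `strict` are my reading of the log, not the script's actual variable names:

```python
import ast

def parse_executed(line: str):
    """Split one 'Executing command: [...]' log line into its fields."""
    _, _, payload = line.partition("Executing command: ")
    fields = ast.literal_eval(payload)  # the log prints a valid list literal
    if len(fields) == 2:                # e.g. the /nvmf/referral entry
        cmd, strict = fields
        expected = ""
    else:
        cmd, expected, strict = fields
    return cmd, expected, strict

cmd, expected, strict = parse_executed(
    "Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]")
assert cmd == "/bdevs/malloc create 32 512 Malloc1"
assert expected == "Malloc1" and strict is True
```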
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:33.124 10:04:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:33.124 10:04:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:33.124 10:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.124 10:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.124 10:04:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:33.124 10:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:33.124 10:04:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.124 10:04:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:33.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:33.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:33.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:33.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:33.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:33.124 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:33.124 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:33.124 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:32:33.124 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:33.124 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:33.124 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:33.124 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:33.124 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:33.124 ' 00:32:39.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:39.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:39.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:39.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:39.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:39.679 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:39.679 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:39.679 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:39.679 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:39.679 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:39.679 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:39.679 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:39.679 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:39.679 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:39.679 10:05:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:32:39.679 10:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.679 10:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.679 10:05:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3154460 00:32:39.679 10:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3154460 ']' 00:32:39.679 10:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3154460 00:32:39.679 10:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:39.679 10:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.679 10:05:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3154460 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3154460' 00:32:39.679 killing process with pid 3154460 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3154460 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3154460 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3154460 ']' 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3154460 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3154460 ']' 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3154460 00:32:39.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3154460) - No such process 00:32:39.679 10:05:02 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3154460 is not found' 00:32:39.679 Process with pid 3154460 is not found 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:39.679 00:32:39.679 real 0m17.367s 00:32:39.679 user 0m38.319s 00:32:39.679 sys 0m0.811s 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.679 10:05:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.679 ************************************ 00:32:39.679 END TEST spdkcli_nvmf_tcp 00:32:39.679 ************************************ 00:32:39.679 10:05:02 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:39.679 10:05:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:39.679 10:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.679 10:05:02 -- common/autotest_common.sh@10 -- # set +x 00:32:39.679 ************************************ 00:32:39.679 START TEST nvmf_identify_passthru 00:32:39.679 ************************************ 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:39.680 * Looking for test storage... 
00:32:39.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1703 -- # lcov --version 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:32:39.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.680 --rc genhtml_branch_coverage=1 00:32:39.680 --rc genhtml_function_coverage=1 00:32:39.680 --rc genhtml_legend=1 00:32:39.680 --rc geninfo_all_blocks=1 00:32:39.680 --rc geninfo_unexecuted_blocks=1 00:32:39.680 00:32:39.680 ' 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:32:39.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.680 --rc genhtml_branch_coverage=1 00:32:39.680 --rc genhtml_function_coverage=1 
00:32:39.680 --rc genhtml_legend=1 00:32:39.680 --rc geninfo_all_blocks=1 00:32:39.680 --rc geninfo_unexecuted_blocks=1 00:32:39.680 00:32:39.680 ' 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:32:39.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.680 --rc genhtml_branch_coverage=1 00:32:39.680 --rc genhtml_function_coverage=1 00:32:39.680 --rc genhtml_legend=1 00:32:39.680 --rc geninfo_all_blocks=1 00:32:39.680 --rc geninfo_unexecuted_blocks=1 00:32:39.680 00:32:39.680 ' 00:32:39.680 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:32:39.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.680 --rc genhtml_branch_coverage=1 00:32:39.680 --rc genhtml_function_coverage=1 00:32:39.680 --rc genhtml_legend=1 00:32:39.680 --rc geninfo_all_blocks=1 00:32:39.680 --rc geninfo_unexecuted_blocks=1 00:32:39.680 00:32:39.680 ' 00:32:39.680 10:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.680 10:05:02 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.680 10:05:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.680 10:05:02 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.680 10:05:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.680 10:05:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:39.680 10:05:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:39.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.680 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.680 10:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.680 10:05:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.680 10:05:02 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.681 10:05:02 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.681 10:05:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.681 10:05:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:39.681 10:05:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.681 10:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:39.681 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.681 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.681 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:32:39.681 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.681 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.681 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.681 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:39.681 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.681 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.681 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.681 10:05:02 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.681 10:05:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:44.961 10:05:08 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:44.961 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:44.961 
10:05:08 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:44.962 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:44.962 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:44.962 Found net devices under 0000:86:00.0: cvl_0_0 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:44.962 Found net devices under 0000:86:00.1: cvl_0_1 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:44.962 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.221 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:45.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:32:45.222 00:32:45.222 --- 10.0.0.2 ping statistics --- 00:32:45.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.222 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:45.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:32:45.222 00:32:45.222 --- 10.0.0.1 ping statistics --- 00:32:45.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.222 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:45.222 10:05:08 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:45.222 10:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:45.222 10:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:45.222 10:05:08 nvmf_identify_passthru -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:45.222 10:05:08 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:45.222 10:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:45.222 10:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:45.222 10:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:45.222 10:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:45.222 10:05:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:49.412 10:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:49.412 10:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:49.412 10:05:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:49.412 10:05:12 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:53.603 10:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:53.603 10:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.603 10:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.603 10:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3161617 00:32:53.603 10:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:53.603 10:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:53.603 10:05:16 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3161617 00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3161617 ']' 00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.603 10:05:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.603 [2024-11-20 10:05:16.875210] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:32:53.603 [2024-11-20 10:05:16.875257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.860 [2024-11-20 10:05:16.954293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:53.860 [2024-11-20 10:05:16.997661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.860 [2024-11-20 10:05:16.997701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.860 [2024-11-20 10:05:16.997708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.860 [2024-11-20 10:05:16.997715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.860 [2024-11-20 10:05:16.997719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:53.860 [2024-11-20 10:05:16.999300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.860 [2024-11-20 10:05:16.999410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.860 [2024-11-20 10:05:16.999519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.860 [2024-11-20 10:05:16.999520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.860 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.860 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:53.860 10:05:17 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:53.860 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.860 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.860 INFO: Log level set to 20 00:32:53.860 INFO: Requests: 00:32:53.860 { 00:32:53.860 "jsonrpc": "2.0", 00:32:53.860 "method": "nvmf_set_config", 00:32:53.860 "id": 1, 00:32:53.860 "params": { 00:32:53.860 "admin_cmd_passthru": { 00:32:53.860 "identify_ctrlr": true 00:32:53.860 } 00:32:53.860 } 00:32:53.860 } 00:32:53.860 00:32:53.860 INFO: response: 00:32:53.860 { 00:32:53.860 "jsonrpc": "2.0", 00:32:53.860 "id": 1, 00:32:53.860 "result": true 00:32:53.860 } 00:32:53.860 00:32:53.860 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.861 10:05:17 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:53.861 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.861 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.861 INFO: Setting log level to 20 00:32:53.861 INFO: Setting log level to 20 00:32:53.861 INFO: Log level set to 20 00:32:53.861 INFO: Log level set to 20 00:32:53.861 
INFO: Requests: 00:32:53.861 { 00:32:53.861 "jsonrpc": "2.0", 00:32:53.861 "method": "framework_start_init", 00:32:53.861 "id": 1 00:32:53.861 } 00:32:53.861 00:32:53.861 INFO: Requests: 00:32:53.861 { 00:32:53.861 "jsonrpc": "2.0", 00:32:53.861 "method": "framework_start_init", 00:32:53.861 "id": 1 00:32:53.861 } 00:32:53.861 00:32:53.861 [2024-11-20 10:05:17.106981] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:53.861 INFO: response: 00:32:53.861 { 00:32:53.861 "jsonrpc": "2.0", 00:32:53.861 "id": 1, 00:32:53.861 "result": true 00:32:53.861 } 00:32:53.861 00:32:53.861 INFO: response: 00:32:53.861 { 00:32:53.861 "jsonrpc": "2.0", 00:32:53.861 "id": 1, 00:32:53.861 "result": true 00:32:53.861 } 00:32:53.861 00:32:53.861 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.861 10:05:17 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:53.861 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.861 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.861 INFO: Setting log level to 40 00:32:53.861 INFO: Setting log level to 40 00:32:53.861 INFO: Setting log level to 40 00:32:53.861 [2024-11-20 10:05:17.120326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.861 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.861 10:05:17 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:53.861 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.861 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:53.861 10:05:17 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:53.861 10:05:17 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.861 10:05:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.216 Nvme0n1 00:32:57.216 10:05:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.216 10:05:19 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:57.216 10:05:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.216 [2024-11-20 10:05:20.035641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.216 10:05:20 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:32:57.216 [
00:32:57.216 {
00:32:57.216 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:32:57.216 "subtype": "Discovery",
00:32:57.216 "listen_addresses": [],
00:32:57.216 "allow_any_host": true,
00:32:57.216 "hosts": []
00:32:57.216 },
00:32:57.216 {
00:32:57.216 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:32:57.216 "subtype": "NVMe",
00:32:57.216 "listen_addresses": [
00:32:57.216 {
00:32:57.216 "trtype": "TCP",
00:32:57.216 "adrfam": "IPv4",
00:32:57.216 "traddr": "10.0.0.2",
00:32:57.216 "trsvcid": "4420"
00:32:57.216 }
00:32:57.216 ],
00:32:57.216 "allow_any_host": true,
00:32:57.216 "hosts": [],
00:32:57.216 "serial_number": "SPDK00000000000001",
00:32:57.216 "model_number": "SPDK bdev Controller",
00:32:57.216 "max_namespaces": 1,
00:32:57.216 "min_cntlid": 1,
00:32:57.216 "max_cntlid": 65519,
00:32:57.216 "namespaces": [
00:32:57.216 {
00:32:57.216 "nsid": 1,
00:32:57.216 "bdev_name": "Nvme0n1",
00:32:57.216 "name": "Nvme0n1",
00:32:57.216 "nguid": "2C0492F5C31F4200B8BDDF121D5AF524",
00:32:57.216 "uuid": "2c0492f5-c31f-4200-b8bd-df121d5af524"
00:32:57.216 }
00:32:57.216 ]
00:32:57.216 }
00:32:57.216 ]
00:32:57.216 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN
00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:57.216 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:57.502 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:57.502 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:57.502 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:57.502 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.502 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:57.502 10:05:20 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:57.502 rmmod nvme_tcp 00:32:57.502 rmmod nvme_fabrics 00:32:57.502 rmmod nvme_keyring 00:32:57.502 10:05:20 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3161617 ']' 00:32:57.502 10:05:20 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3161617 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3161617 ']' 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3161617 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3161617 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3161617' 00:32:57.502 killing process with pid 3161617 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3161617 00:32:57.502 10:05:20 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3161617 00:32:58.893 10:05:22 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:58.893 10:05:22 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:58.893 10:05:22 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:58.893 10:05:22 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:58.893 10:05:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:58.893 10:05:22 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:58.893 10:05:22 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:58.893 10:05:22 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:58.893 10:05:22 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:58.893 10:05:22 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.893 10:05:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:59.152 10:05:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.062 10:05:24 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:01.062 00:33:01.062 real 0m22.044s 00:33:01.062 user 0m27.526s 00:33:01.062 sys 0m6.221s 00:33:01.062 10:05:24 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.062 10:05:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:01.062 ************************************ 00:33:01.062 END TEST nvmf_identify_passthru 00:33:01.062 ************************************ 00:33:01.062 10:05:24 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:01.062 10:05:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:01.062 10:05:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.062 10:05:24 -- common/autotest_common.sh@10 -- # set +x 00:33:01.062 ************************************ 00:33:01.062 START TEST nvmf_dif 00:33:01.062 ************************************ 00:33:01.062 10:05:24 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:01.322 * Looking for test storage... 
00:33:01.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:01.322 10:05:24 nvmf_dif -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:33:01.322 10:05:24 nvmf_dif -- common/autotest_common.sh@1703 -- # lcov --version 00:33:01.322 10:05:24 nvmf_dif -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:33:01.322 10:05:24 nvmf_dif -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.322 10:05:24 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:01.322 10:05:24 nvmf_dif -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.323 10:05:24 nvmf_dif -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:33:01.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.323 --rc genhtml_branch_coverage=1 00:33:01.323 --rc genhtml_function_coverage=1 00:33:01.323 --rc genhtml_legend=1 00:33:01.323 --rc geninfo_all_blocks=1 00:33:01.323 --rc geninfo_unexecuted_blocks=1 00:33:01.323 00:33:01.323 ' 00:33:01.323 10:05:24 nvmf_dif -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:33:01.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.323 --rc genhtml_branch_coverage=1 00:33:01.323 --rc genhtml_function_coverage=1 00:33:01.323 --rc genhtml_legend=1 00:33:01.323 --rc geninfo_all_blocks=1 00:33:01.323 --rc geninfo_unexecuted_blocks=1 00:33:01.323 00:33:01.323 ' 00:33:01.323 10:05:24 nvmf_dif -- common/autotest_common.sh@1717 -- # export 
'LCOV=lcov 00:33:01.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.323 --rc genhtml_branch_coverage=1 00:33:01.323 --rc genhtml_function_coverage=1 00:33:01.323 --rc genhtml_legend=1 00:33:01.323 --rc geninfo_all_blocks=1 00:33:01.323 --rc geninfo_unexecuted_blocks=1 00:33:01.323 00:33:01.323 ' 00:33:01.323 10:05:24 nvmf_dif -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:33:01.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.323 --rc genhtml_branch_coverage=1 00:33:01.323 --rc genhtml_function_coverage=1 00:33:01.323 --rc genhtml_legend=1 00:33:01.323 --rc geninfo_all_blocks=1 00:33:01.323 --rc geninfo_unexecuted_blocks=1 00:33:01.323 00:33:01.323 ' 00:33:01.323 10:05:24 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:01.323 10:05:24 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.323 10:05:24 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.323 10:05:24 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.323 10:05:24 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.323 10:05:24 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.323 10:05:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.323 10:05:24 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.323 10:05:24 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.323 10:05:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:01.323 10:05:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:01.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.323 10:05:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:01.323 10:05:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:01.323 10:05:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:01.323 10:05:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:01.323 10:05:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.323 10:05:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:01.323 10:05:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:01.323 10:05:24 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.323 10:05:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:07.897 10:05:30 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:07.897 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:07.897 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.897 10:05:30 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:07.897 Found net devices under 0000:86:00.0: cvl_0_0 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:07.897 10:05:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:07.898 Found net devices under 0000:86:00.1: cvl_0_1 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:07.898 
10:05:30 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:07.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:07.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms
00:33:07.898
00:33:07.898 --- 10.0.0.2 ping statistics ---
00:33:07.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:07.898 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:07.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:07.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:33:07.898
00:33:07.898 --- 10.0.0.1 ping statistics ---
00:33:07.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:07.898 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@450 -- # return 0
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:33:07.898 10:05:30 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:33:09.805 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:33:09.805 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:33:09.805 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:33:09.805 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:33:09.805 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:33:09.805 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:33:10.064 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:33:10.064 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:33:10.064 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:33:10.064 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:33:10.064 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:33:10.064 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:33:10.064 0000:80:04.4 (8086 2021): Already
using the vfio-pci driver 00:33:10.064 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:10.064 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:10.064 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:10.064 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.064 10:05:33 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:10.064 10:05:33 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.064 10:05:33 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.064 10:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3167281 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:10.064 10:05:33 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3167281 00:33:10.064 10:05:33 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3167281 ']' 00:33:10.064 10:05:33 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.064 10:05:33 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.064 10:05:33 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:10.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.064 10:05:33 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.064 10:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.323 [2024-11-20 10:05:33.401278] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:33:10.323 [2024-11-20 10:05:33.401323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.323 [2024-11-20 10:05:33.465027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.323 [2024-11-20 10:05:33.506361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.323 [2024-11-20 10:05:33.506401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.323 [2024-11-20 10:05:33.506409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.323 [2024-11-20 10:05:33.506415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.323 [2024-11-20 10:05:33.506420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:10.323 [2024-11-20 10:05:33.506996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.323 10:05:33 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.323 10:05:33 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:10.323 10:05:33 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.323 10:05:33 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.323 10:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.323 10:05:33 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.323 10:05:33 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:10.323 10:05:33 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:10.323 10:05:33 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.323 10:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.323 [2024-11-20 10:05:33.650460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.583 10:05:33 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.583 10:05:33 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:10.583 10:05:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:10.583 10:05:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:10.583 10:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.583 ************************************ 00:33:10.583 START TEST fio_dif_1_default 00:33:10.583 ************************************ 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:10.583 bdev_null0 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:10.583 [2024-11-20 10:05:33.722789] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:10.583 10:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:10.583 { 00:33:10.583 "params": { 00:33:10.583 "name": "Nvme$subsystem", 00:33:10.583 "trtype": "$TEST_TRANSPORT", 00:33:10.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:10.584 "adrfam": "ipv4", 00:33:10.584 "trsvcid": "$NVMF_PORT", 00:33:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.584 "hdgst": ${hdgst:-false}, 00:33:10.584 "ddgst": ${ddgst:-false} 00:33:10.584 }, 00:33:10.584 "method": "bdev_nvme_attach_controller" 00:33:10.584 } 00:33:10.584 EOF 00:33:10.584 )") 
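The `create_subsystems 0` trace above reduces to four SPDK JSON-RPC calls per subsystem id. A hedged sketch of that sequence, shown as a dry-run (`RPC=echo`) so the commands are visible without a live target; against a running `nvmf_tgt` you would instead point `RPC` at `scripts/rpc.py -s /var/tmp/spdk.sock`:

```shell
# Dry-run sketch of what target/dif.sh's create_subsystem does for sub_id=0.
# RPC=echo just prints each RPC; swap in scripts/rpc.py for a real target.
RPC=echo
sub=0
$RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
$RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
$RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420
```

The null bdev carries 16 bytes of per-block metadata with DIF type 1, which is what `--dif-insert-or-strip` on the transport exercises once fio connects over TCP.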
00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:10.584 "params": { 00:33:10.584 "name": "Nvme0", 00:33:10.584 "trtype": "tcp", 00:33:10.584 "traddr": "10.0.0.2", 00:33:10.584 "adrfam": "ipv4", 00:33:10.584 "trsvcid": "4420", 00:33:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:10.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:10.584 "hdgst": false, 00:33:10.584 "ddgst": false 00:33:10.584 }, 00:33:10.584 "method": "bdev_nvme_attach_controller" 00:33:10.584 }' 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:10.584 10:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.843 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:10.843 fio-3.35 
00:33:10.843 Starting 1 thread 00:33:23.051 00:33:23.051 filename0: (groupid=0, jobs=1): err= 0: pid=3167571: Wed Nov 20 10:05:44 2024 00:33:23.051 read: IOPS=211, BW=847KiB/s (868kB/s)(8496KiB/10027msec) 00:33:23.051 slat (nsec): min=5543, max=25481, avg=6230.60, stdev=1117.52 00:33:23.051 clat (usec): min=378, max=44830, avg=18865.76, stdev=20440.52 00:33:23.051 lat (usec): min=384, max=44856, avg=18871.99, stdev=20440.45 00:33:23.051 clat percentiles (usec): 00:33:23.051 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 416], 20.00th=[ 474], 00:33:23.051 | 30.00th=[ 553], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[40633], 00:33:23.051 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:33:23.051 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:33:23.051 | 99.99th=[44827] 00:33:23.051 bw ( KiB/s): min= 704, max= 960, per=100.00%, avg=848.00, stdev=77.35, samples=20 00:33:23.051 iops : min= 176, max= 240, avg=212.00, stdev=19.34, samples=20 00:33:23.051 lat (usec) : 500=28.30%, 750=27.07% 00:33:23.051 lat (msec) : 50=44.63% 00:33:23.051 cpu : usr=92.48%, sys=7.28%, ctx=13, majf=0, minf=0 00:33:23.051 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.051 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.051 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:23.051 00:33:23.051 Run status group 0 (all jobs): 00:33:23.051 READ: bw=847KiB/s (868kB/s), 847KiB/s-847KiB/s (868kB/s-868kB/s), io=8496KiB (8700kB), run=10027-10027msec 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
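The single-thread fio summary above is internally consistent and can be re-derived from the issued-I/O count: 2124 reads of the 4 KiB blocksize over the 10027 ms run. A quick check:

```shell
# Re-derive the fio summary figures for fio_dif_1_default:
# issued rwts total=2124 reads at bs=4096B over run=10027msec.
issued=2124
blk_kib=4
io_kib=$((issued * blk_kib))                 # total data read
echo "io=${io_kib}KiB"                       # matches the reported io=8496KiB
awk -v kib="$io_kib" 'BEGIN { printf "bw=%.0fKiB/s\n", kib / 10.027 }'
# matches the reported bw=847KiB/s
```

The bimodal clat distribution (sub-millisecond percentiles up to ~620 usec, then a jump to ~41 sec) is the expected shape for this workload: roughly half the reads complete immediately while the rest sit behind the DIF strip path, which is why avg clat lands near 18.9 ms despite the tight low-end percentiles.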
00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.051 00:33:23.051 real 0m11.260s 00:33:23.051 user 0m16.004s 00:33:23.051 sys 0m1.028s 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.051 10:05:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 ************************************ 00:33:23.051 END TEST fio_dif_1_default 00:33:23.051 ************************************ 00:33:23.051 10:05:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:23.051 10:05:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:23.051 10:05:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.051 10:05:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 ************************************ 00:33:23.051 START TEST fio_dif_1_multi_subsystems 00:33:23.051 ************************************ 00:33:23.051 10:05:45 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 bdev_null0 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.051 10:05:45 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 [2024-11-20 10:05:45.052464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 bdev_null1 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.051 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.052 { 00:33:23.052 "params": { 00:33:23.052 "name": "Nvme$subsystem", 00:33:23.052 "trtype": "$TEST_TRANSPORT", 00:33:23.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.052 "adrfam": "ipv4", 00:33:23.052 "trsvcid": "$NVMF_PORT", 00:33:23.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.052 "hdgst": ${hdgst:-false}, 00:33:23.052 "ddgst": ${ddgst:-false} 00:33:23.052 }, 00:33:23.052 "method": "bdev_nvme_attach_controller" 00:33:23.052 } 00:33:23.052 EOF 00:33:23.052 )") 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.052 { 00:33:23.052 "params": { 00:33:23.052 "name": "Nvme$subsystem", 00:33:23.052 "trtype": "$TEST_TRANSPORT", 00:33:23.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.052 "adrfam": "ipv4", 00:33:23.052 "trsvcid": "$NVMF_PORT", 00:33:23.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.052 "hdgst": ${hdgst:-false}, 00:33:23.052 "ddgst": ${ddgst:-false} 00:33:23.052 }, 00:33:23.052 "method": "bdev_nvme_attach_controller" 00:33:23.052 } 00:33:23.052 EOF 00:33:23.052 )") 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:23.052 "params": { 00:33:23.052 "name": "Nvme0", 00:33:23.052 "trtype": "tcp", 00:33:23.052 "traddr": "10.0.0.2", 00:33:23.052 "adrfam": "ipv4", 00:33:23.052 "trsvcid": "4420", 00:33:23.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:23.052 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:23.052 "hdgst": false, 00:33:23.052 "ddgst": false 00:33:23.052 }, 00:33:23.052 "method": "bdev_nvme_attach_controller" 00:33:23.052 },{ 00:33:23.052 "params": { 00:33:23.052 "name": "Nvme1", 00:33:23.052 "trtype": "tcp", 00:33:23.052 "traddr": "10.0.0.2", 00:33:23.052 "adrfam": "ipv4", 00:33:23.052 "trsvcid": "4420", 00:33:23.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.052 "hdgst": false, 00:33:23.052 "ddgst": false 00:33:23.052 }, 00:33:23.052 "method": "bdev_nvme_attach_controller" 00:33:23.052 }' 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:23.052 10:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.052 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.052 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.052 fio-3.35 00:33:23.052 Starting 2 threads 00:33:33.029 00:33:33.029 filename0: (groupid=0, jobs=1): err= 0: pid=3169460: Wed Nov 20 10:05:56 2024 00:33:33.029 read: IOPS=98, BW=395KiB/s (404kB/s)(3952KiB/10010msec) 00:33:33.029 slat (nsec): min=6021, max=26912, avg=7692.02, stdev=2480.89 00:33:33.029 clat (usec): min=383, max=42347, avg=40501.27, stdev=4444.01 00:33:33.029 lat (usec): min=389, max=42374, avg=40508.96, stdev=4444.05 00:33:33.029 clat percentiles (usec): 00:33:33.029 | 1.00th=[ 611], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:33.029 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:33.029 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:33.029 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:33.029 | 99.99th=[42206] 00:33:33.029 bw ( KiB/s): min= 384, max= 416, per=49.87%, avg=393.60, stdev=15.05, samples=20 00:33:33.029 iops : min= 96, max= 104, avg=98.40, stdev= 3.76, samples=20 00:33:33.029 lat (usec) : 500=0.81%, 750=0.40% 00:33:33.029 lat (msec) : 50=98.79% 00:33:33.029 cpu : usr=96.91%, sys=2.85%, ctx=8, majf=0, minf=9 00:33:33.029 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:33.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.029 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.029 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.029 filename1: (groupid=0, jobs=1): err= 0: pid=3169461: Wed Nov 20 10:05:56 2024 00:33:33.029 read: IOPS=98, BW=393KiB/s (403kB/s)(3936KiB/10009msec) 00:33:33.029 slat (nsec): min=5970, max=26986, avg=7617.32, stdev=2430.34 00:33:33.029 clat (usec): min=486, max=42481, avg=40662.56, stdev=3640.69 00:33:33.029 lat (usec): min=492, max=42487, avg=40670.18, stdev=3640.70 00:33:33.029 clat percentiles (usec): 00:33:33.029 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:33.029 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:33.029 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:33.029 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:33:33.029 | 99.99th=[42730] 00:33:33.029 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=392.00, stdev=14.22, samples=20 00:33:33.029 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:33:33.029 lat (usec) : 500=0.61%, 750=0.20% 00:33:33.029 lat (msec) : 50=99.19% 00:33:33.029 cpu : usr=96.76%, sys=3.00%, ctx=11, majf=0, minf=9 00:33:33.029 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.029 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.029 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.029 00:33:33.029 Run status group 0 (all jobs): 00:33:33.029 READ: bw=788KiB/s (807kB/s), 393KiB/s-395KiB/s (403kB/s-404kB/s), io=7888KiB (8077kB), run=10009-10010msec 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
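The two-thread numbers above also tie out: both filenames land near half of the ~788 KiB/s aggregate, since both null bdevs sit behind the same single-core target. Re-deriving the per-file bandwidth and aggregate io from the issued counts and run times:

```shell
# Re-derive the multi-subsystem fio figures:
# filename0: 988 reads / 10010 msec, filename1: 984 reads / 10009 msec, bs=4 KiB.
awk 'BEGIN {
  printf "filename0 bw=%.0fKiB/s\n", 3952 / 10.010   # reported 395KiB/s
  printf "filename1 bw=%.0fKiB/s\n", 3936 / 10.009   # reported 393KiB/s
  printf "aggregate io=%dKiB\n", (988 + 984) * 4     # reported io=7888KiB
}'
```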
destroy_subsystems 0 1 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.289 10:05:56 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.289 00:33:33.289 real 0m11.401s 00:33:33.289 user 0m25.598s 00:33:33.289 sys 0m0.890s 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.289 ************************************ 00:33:33.289 END TEST fio_dif_1_multi_subsystems 00:33:33.289 ************************************ 00:33:33.289 10:05:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:33.289 10:05:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:33.289 10:05:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:33.289 ************************************ 00:33:33.289 START TEST fio_dif_rand_params 00:33:33.289 ************************************ 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:33.289 10:05:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.289 bdev_null0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.289 [2024-11-20 10:05:56.524683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:33.289 10:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.289 { 00:33:33.289 "params": { 00:33:33.289 "name": "Nvme$subsystem", 00:33:33.289 "trtype": "$TEST_TRANSPORT", 00:33:33.289 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:33.289 "adrfam": "ipv4", 00:33:33.289 "trsvcid": "$NVMF_PORT", 00:33:33.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.290 "hdgst": ${hdgst:-false}, 00:33:33.290 "ddgst": ${ddgst:-false} 00:33:33.290 }, 00:33:33.290 "method": "bdev_nvme_attach_controller" 00:33:33.290 } 00:33:33.290 EOF 00:33:33.290 )") 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:33.290 10:05:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:33.290 "params": { 00:33:33.290 "name": "Nvme0", 00:33:33.290 "trtype": "tcp", 00:33:33.290 "traddr": "10.0.0.2", 00:33:33.290 "adrfam": "ipv4", 00:33:33.290 "trsvcid": "4420", 00:33:33.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:33.290 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:33.290 "hdgst": false, 00:33:33.290 "ddgst": false 00:33:33.290 }, 00:33:33.290 "method": "bdev_nvme_attach_controller" 00:33:33.290 }' 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:33.290 10:05:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.861 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:33.861 ... 00:33:33.861 fio-3.35 00:33:33.861 Starting 3 threads 00:33:39.130 00:33:39.130 filename0: (groupid=0, jobs=1): err= 0: pid=3171365: Wed Nov 20 10:06:02 2024 00:33:39.130 read: IOPS=321, BW=40.2MiB/s (42.1MB/s)(201MiB/5008msec) 00:33:39.130 slat (usec): min=6, max=102, avg=13.20, stdev= 6.60 00:33:39.130 clat (usec): min=3257, max=91812, avg=9318.06, stdev=6373.47 00:33:39.130 lat (usec): min=3264, max=91824, avg=9331.26, stdev=6373.79 00:33:39.130 clat percentiles (usec): 00:33:39.130 | 1.00th=[ 3818], 5.00th=[ 6521], 10.00th=[ 7177], 20.00th=[ 7767], 00:33:39.130 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:33:39.130 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10290], 00:33:39.130 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51643], 99.95th=[91751], 00:33:39.130 | 99.99th=[91751] 00:33:39.130 bw ( KiB/s): min=27392, max=50432, per=33.94%, avg=41113.60, stdev=7049.38, samples=10 00:33:39.130 iops : min= 214, max= 394, avg=321.20, stdev=55.07, samples=10 00:33:39.130 lat (msec) : 4=1.37%, 10=91.55%, 20=4.91%, 50=1.18%, 100=0.99% 00:33:39.130 cpu : usr=96.05%, sys=3.61%, ctx=18, majf=0, minf=89 00:33:39.130 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.130 issued rwts: total=1609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.130 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.130 filename0: (groupid=0, jobs=1): err= 0: pid=3171366: Wed Nov 20 10:06:02 2024 00:33:39.130 read: IOPS=318, BW=39.9MiB/s (41.8MB/s)(201MiB/5046msec) 00:33:39.130 slat (nsec): min=6240, max=52749, avg=11775.35, stdev=3833.36 00:33:39.130 
clat (usec): min=5067, max=53009, avg=9366.98, stdev=4007.01 00:33:39.130 lat (usec): min=5074, max=53021, avg=9378.76, stdev=4007.27 00:33:39.130 clat percentiles (usec): 00:33:39.130 | 1.00th=[ 5604], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 7963], 00:33:39.130 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:33:39.130 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11338], 00:33:39.130 | 99.00th=[12518], 99.50th=[47973], 99.90th=[52167], 99.95th=[53216], 00:33:39.130 | 99.99th=[53216] 00:33:39.130 bw ( KiB/s): min=35072, max=47104, per=33.94%, avg=41113.60, stdev=3578.31, samples=10 00:33:39.130 iops : min= 274, max= 368, avg=321.20, stdev=27.96, samples=10 00:33:39.130 lat (msec) : 10=74.46%, 20=24.67%, 50=0.50%, 100=0.37% 00:33:39.130 cpu : usr=95.82%, sys=3.85%, ctx=6, majf=0, minf=77 00:33:39.130 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.130 issued rwts: total=1609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.130 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.130 filename0: (groupid=0, jobs=1): err= 0: pid=3171367: Wed Nov 20 10:06:02 2024 00:33:39.130 read: IOPS=311, BW=38.9MiB/s (40.8MB/s)(195MiB/5004msec) 00:33:39.130 slat (nsec): min=6262, max=41934, avg=11716.66, stdev=3927.64 00:33:39.130 clat (usec): min=3222, max=51375, avg=9621.72, stdev=3660.45 00:33:39.130 lat (usec): min=3229, max=51387, avg=9633.44, stdev=3660.72 00:33:39.130 clat percentiles (usec): 00:33:39.130 | 1.00th=[ 3490], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 8094], 00:33:39.130 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:33:39.130 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11600], 95.00th=[11994], 00:33:39.130 | 99.00th=[12780], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 
00:33:39.130 | 99.99th=[51119] 00:33:39.130 bw ( KiB/s): min=31744, max=50432, per=32.86%, avg=39808.00, stdev=5135.98, samples=10 00:33:39.130 iops : min= 248, max= 394, avg=311.00, stdev=40.12, samples=10 00:33:39.130 lat (msec) : 4=3.27%, 10=52.31%, 20=43.84%, 50=0.13%, 100=0.45% 00:33:39.130 cpu : usr=96.14%, sys=3.56%, ctx=7, majf=0, minf=24 00:33:39.130 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.130 issued rwts: total=1558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.130 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:39.130 00:33:39.130 Run status group 0 (all jobs): 00:33:39.130 READ: bw=118MiB/s (124MB/s), 38.9MiB/s-40.2MiB/s (40.8MB/s-42.1MB/s), io=597MiB (626MB), run=5004-5046msec 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 bdev_null0 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 
10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 [2024-11-20 10:06:02.638402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 bdev_null1 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.390 
10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:39.391 bdev_null2 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.391 { 00:33:39.391 "params": { 00:33:39.391 "name": "Nvme$subsystem", 00:33:39.391 "trtype": "$TEST_TRANSPORT", 00:33:39.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.391 "adrfam": "ipv4", 00:33:39.391 "trsvcid": "$NVMF_PORT", 00:33:39.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.391 "hdgst": ${hdgst:-false}, 00:33:39.391 "ddgst": ${ddgst:-false} 00:33:39.391 }, 00:33:39.391 "method": "bdev_nvme_attach_controller" 00:33:39.391 } 00:33:39.391 EOF 00:33:39.391 )") 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.391 10:06:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.651 { 00:33:39.651 "params": { 00:33:39.651 "name": "Nvme$subsystem", 00:33:39.651 "trtype": "$TEST_TRANSPORT", 00:33:39.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.651 "adrfam": "ipv4", 00:33:39.651 "trsvcid": "$NVMF_PORT", 00:33:39.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.651 "hdgst": ${hdgst:-false}, 00:33:39.651 "ddgst": ${ddgst:-false} 00:33:39.651 }, 00:33:39.651 "method": "bdev_nvme_attach_controller" 00:33:39.651 } 00:33:39.651 EOF 00:33:39.651 )") 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:39.651 { 00:33:39.651 "params": { 00:33:39.651 "name": "Nvme$subsystem", 00:33:39.651 "trtype": "$TEST_TRANSPORT", 00:33:39.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.651 "adrfam": "ipv4", 00:33:39.651 "trsvcid": "$NVMF_PORT", 00:33:39.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.651 "hdgst": ${hdgst:-false}, 00:33:39.651 "ddgst": ${ddgst:-false} 00:33:39.651 }, 00:33:39.651 "method": "bdev_nvme_attach_controller" 00:33:39.651 } 00:33:39.651 EOF 00:33:39.651 )") 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:39.651 "params": { 00:33:39.651 "name": "Nvme0", 00:33:39.651 "trtype": "tcp", 00:33:39.651 "traddr": "10.0.0.2", 00:33:39.651 "adrfam": "ipv4", 00:33:39.651 "trsvcid": "4420", 00:33:39.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.651 "hdgst": false, 00:33:39.651 "ddgst": false 00:33:39.651 }, 00:33:39.651 "method": "bdev_nvme_attach_controller" 00:33:39.651 },{ 00:33:39.651 "params": { 00:33:39.651 "name": "Nvme1", 00:33:39.651 "trtype": "tcp", 00:33:39.651 "traddr": "10.0.0.2", 00:33:39.651 "adrfam": "ipv4", 00:33:39.651 "trsvcid": "4420", 00:33:39.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:39.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:39.651 "hdgst": false, 00:33:39.651 "ddgst": false 00:33:39.651 }, 00:33:39.651 "method": "bdev_nvme_attach_controller" 00:33:39.651 },{ 00:33:39.651 "params": { 00:33:39.651 "name": "Nvme2", 00:33:39.651 "trtype": "tcp", 00:33:39.651 "traddr": "10.0.0.2", 00:33:39.651 "adrfam": "ipv4", 00:33:39.651 "trsvcid": "4420", 00:33:39.651 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:39.651 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:39.651 "hdgst": false, 00:33:39.651 "ddgst": false 00:33:39.651 }, 00:33:39.651 "method": "bdev_nvme_attach_controller" 00:33:39.651 }' 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:39.651 10:06:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:39.651 10:06:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:39.911 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:39.911 ... 00:33:39.911 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:39.911 ... 00:33:39.911 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:39.911 ... 
00:33:39.911 fio-3.35 00:33:39.911 Starting 24 threads 00:33:52.116 00:33:52.116 filename0: (groupid=0, jobs=1): err= 0: pid=3172687: Wed Nov 20 10:06:14 2024 00:33:52.116 read: IOPS=628, BW=2513KiB/s (2574kB/s)(24.6MiB/10007msec) 00:33:52.116 slat (nsec): min=6212, max=95395, avg=39767.61, stdev=20714.16 00:33:52.116 clat (usec): min=1176, max=29273, avg=25157.98, stdev=5048.99 00:33:52.116 lat (usec): min=1184, max=29289, avg=25197.74, stdev=5054.59 00:33:52.116 clat percentiles (usec): 00:33:52.116 | 1.00th=[ 1287], 5.00th=[17433], 10.00th=[25297], 20.00th=[25560], 00:33:52.116 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:33:52.116 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[28181], 00:33:52.116 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:33:52.116 | 99.99th=[29230] 00:33:52.116 bw ( KiB/s): min= 2176, max= 4480, per=4.43%, avg=2519.58, stdev=488.44, samples=19 00:33:52.116 iops : min= 544, max= 1120, avg=629.89, stdev=122.11, samples=19 00:33:52.116 lat (msec) : 2=3.28%, 4=0.52%, 10=0.40%, 20=0.89%, 50=94.91% 00:33:52.116 cpu : usr=98.55%, sys=1.00%, ctx=43, majf=0, minf=46 00:33:52.116 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:52.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.116 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.116 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.116 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.116 filename0: (groupid=0, jobs=1): err= 0: pid=3172688: Wed Nov 20 10:06:14 2024 00:33:52.116 read: IOPS=598, BW=2392KiB/s (2450kB/s)(23.4MiB/10032msec) 00:33:52.116 slat (nsec): min=5438, max=36845, avg=8623.57, stdev=2619.72 00:33:52.116 clat (usec): min=18307, max=90727, avg=26666.49, stdev=3468.42 00:33:52.116 lat (usec): min=18314, max=90740, avg=26675.12, stdev=3468.50 00:33:52.116 clat percentiles (usec): 
00:33:52.116 | 1.00th=[25560], 5.00th=[25560], 10.00th=[25560], 20.00th=[25560], 00:33:52.116 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:33:52.116 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28181], 95.00th=[28443], 00:33:52.116 | 99.00th=[28967], 99.50th=[31065], 99.90th=[90702], 99.95th=[90702], 00:33:52.116 | 99.99th=[90702] 00:33:52.116 bw ( KiB/s): min= 2176, max= 2560, per=4.21%, avg=2394.20, stdev=84.07, samples=20 00:33:52.117 iops : min= 544, max= 640, avg=598.55, stdev=21.02, samples=20 00:33:52.117 lat (msec) : 20=0.27%, 50=99.47%, 100=0.27% 00:33:52.117 cpu : usr=98.18%, sys=1.03%, ctx=208, majf=0, minf=31 00:33:52.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.117 filename0: (groupid=0, jobs=1): err= 0: pid=3172689: Wed Nov 20 10:06:14 2024 00:33:52.117 read: IOPS=591, BW=2368KiB/s (2425kB/s)(23.6MiB/10217msec) 00:33:52.117 slat (usec): min=7, max=107, avg=52.81, stdev=13.65 00:33:52.117 clat (msec): min=9, max=236, avg=26.58, stdev=10.90 00:33:52.117 lat (msec): min=9, max=236, avg=26.64, stdev=10.90 00:33:52.117 clat percentiles (msec): 00:33:52.117 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.117 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.117 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.117 | 99.00th=[ 29], 99.50th=[ 29], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.117 | 99.99th=[ 236] 00:33:52.117 bw ( KiB/s): min= 2176, max= 2560, per=4.25%, avg=2412.80, stdev=119.46, samples=20 00:33:52.117 iops : min= 544, max= 640, avg=603.20, stdev=29.87, samples=20 00:33:52.117 lat (msec) : 10=0.23%, 
20=0.64%, 50=98.86%, 250=0.26% 00:33:52.117 cpu : usr=98.73%, sys=0.91%, ctx=22, majf=0, minf=40 00:33:52.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.117 filename0: (groupid=0, jobs=1): err= 0: pid=3172691: Wed Nov 20 10:06:14 2024 00:33:52.117 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.7MiB/10235msec) 00:33:52.117 slat (nsec): min=6103, max=94205, avg=51653.05, stdev=14221.08 00:33:52.117 clat (msec): min=7, max=236, avg=26.55, stdev=10.91 00:33:52.117 lat (msec): min=7, max=236, avg=26.60, stdev=10.91 00:33:52.117 clat percentiles (msec): 00:33:52.117 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.117 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.117 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.117 | 99.00th=[ 29], 99.50th=[ 29], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.117 | 99.99th=[ 236] 00:33:52.117 bw ( KiB/s): min= 2304, max= 2744, per=4.26%, avg=2417.75, stdev=102.29, samples=20 00:33:52.117 iops : min= 576, max= 686, avg=604.40, stdev=25.60, samples=20 00:33:52.117 lat (msec) : 10=0.40%, 20=0.77%, 50=98.57%, 250=0.26% 00:33:52.117 cpu : usr=98.78%, sys=0.85%, ctx=14, majf=0, minf=28 00:33:52.117 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 issued rwts: total=6071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.117 filename0: (groupid=0, jobs=1): err= 0: 
pid=3172692: Wed Nov 20 10:06:14 2024 00:33:52.117 read: IOPS=589, BW=2358KiB/s (2414kB/s)(23.5MiB/10206msec) 00:33:52.117 slat (usec): min=3, max=109, avg=55.87, stdev=13.21 00:33:52.117 clat (msec): min=17, max=237, avg=26.67, stdev=10.88 00:33:52.117 lat (msec): min=17, max=237, avg=26.72, stdev=10.88 00:33:52.117 clat percentiles (msec): 00:33:52.117 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.117 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.117 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.117 | 99.00th=[ 29], 99.50th=[ 29], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.117 | 99.99th=[ 239] 00:33:52.117 bw ( KiB/s): min= 2281, max= 2560, per=4.22%, avg=2398.85, stdev=72.23, samples=20 00:33:52.117 iops : min= 570, max= 640, avg=599.70, stdev=18.08, samples=20 00:33:52.117 lat (msec) : 20=0.27%, 50=99.47%, 250=0.27% 00:33:52.117 cpu : usr=97.77%, sys=1.33%, ctx=249, majf=0, minf=22 00:33:52.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.117 filename0: (groupid=0, jobs=1): err= 0: pid=3172693: Wed Nov 20 10:06:14 2024 00:33:52.117 read: IOPS=592, BW=2368KiB/s (2425kB/s)(23.6MiB/10216msec) 00:33:52.117 slat (nsec): min=6507, max=86557, avg=40157.21, stdev=18026.23 00:33:52.117 clat (msec): min=9, max=238, avg=26.71, stdev=10.90 00:33:52.117 lat (msec): min=9, max=238, avg=26.75, stdev=10.90 00:33:52.117 clat percentiles (msec): 00:33:52.117 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.117 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 27], 00:33:52.117 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 
29], 00:33:52.117 | 99.00th=[ 29], 99.50th=[ 29], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.117 | 99.99th=[ 239] 00:33:52.117 bw ( KiB/s): min= 2176, max= 2560, per=4.25%, avg=2412.80, stdev=112.01, samples=20 00:33:52.117 iops : min= 544, max= 640, avg=603.20, stdev=28.00, samples=20 00:33:52.117 lat (msec) : 10=0.03%, 20=0.99%, 50=98.71%, 250=0.26% 00:33:52.117 cpu : usr=98.96%, sys=0.69%, ctx=17, majf=0, minf=30 00:33:52.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.117 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.117 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.117 filename0: (groupid=0, jobs=1): err= 0: pid=3172694: Wed Nov 20 10:06:14 2024 00:33:52.117 read: IOPS=588, BW=2355KiB/s (2412kB/s)(23.4MiB/10191msec) 00:33:52.117 slat (usec): min=5, max=101, avg=50.97, stdev=11.99 00:33:52.117 clat (msec): min=24, max=236, avg=26.74, stdev=10.89 00:33:52.117 lat (msec): min=24, max=236, avg=26.79, stdev=10.89 00:33:52.117 clat percentiles (msec): 00:33:52.117 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.117 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.117 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.117 | 99.00th=[ 29], 99.50th=[ 35], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.117 | 99.99th=[ 236] 00:33:52.117 bw ( KiB/s): min= 2176, max= 2560, per=4.21%, avg=2394.00, stdev=117.88, samples=20 00:33:52.117 iops : min= 544, max= 640, avg=598.50, stdev=29.47, samples=20 00:33:52.117 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.117 cpu : usr=98.95%, sys=0.70%, ctx=13, majf=0, minf=29 00:33:52.117 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:52.117 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.118 filename0: (groupid=0, jobs=1): err= 0: pid=3172695: Wed Nov 20 10:06:14 2024 00:33:52.118 read: IOPS=587, BW=2352KiB/s (2408kB/s)(23.4MiB/10178msec) 00:33:52.118 slat (usec): min=4, max=103, avg=49.78, stdev=12.80 00:33:52.118 clat (msec): min=24, max=236, avg=26.78, stdev=10.95 00:33:52.118 lat (msec): min=24, max=236, avg=26.83, stdev=10.95 00:33:52.118 clat percentiles (msec): 00:33:52.118 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.118 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.118 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.118 | 99.00th=[ 29], 99.50th=[ 48], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.118 | 99.99th=[ 236] 00:33:52.118 bw ( KiB/s): min= 2176, max= 2432, per=4.20%, avg=2387.20, stdev=75.15, samples=20 00:33:52.118 iops : min= 544, max= 608, avg=596.80, stdev=18.79, samples=20 00:33:52.118 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.118 cpu : usr=98.79%, sys=0.83%, ctx=27, majf=0, minf=35 00:33:52.118 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.118 filename1: (groupid=0, jobs=1): err= 0: pid=3172696: Wed Nov 20 10:06:14 2024 00:33:52.118 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.4MiB/10181msec) 00:33:52.118 slat (usec): min=7, max=131, avg=53.70, stdev=12.94 00:33:52.118 clat (msec): min=24, max=236, avg=26.74, stdev=10.96 00:33:52.118 lat (msec): min=24, max=236, 
avg=26.80, stdev=10.96 00:33:52.118 clat percentiles (msec): 00:33:52.118 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.118 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.118 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.118 | 99.00th=[ 29], 99.50th=[ 48], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.118 | 99.99th=[ 236] 00:33:52.118 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2387.20, stdev=85.87, samples=20 00:33:52.118 iops : min= 544, max= 640, avg=596.80, stdev=21.47, samples=20 00:33:52.118 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.118 cpu : usr=98.78%, sys=0.84%, ctx=20, majf=0, minf=38 00:33:52.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.118 filename1: (groupid=0, jobs=1): err= 0: pid=3172697: Wed Nov 20 10:06:14 2024 00:33:52.118 read: IOPS=592, BW=2369KiB/s (2426kB/s)(23.6MiB/10213msec) 00:33:52.118 slat (nsec): min=6266, max=91927, avg=23819.80, stdev=18341.40 00:33:52.118 clat (msec): min=9, max=235, avg=26.84, stdev=10.86 00:33:52.118 lat (msec): min=9, max=235, avg=26.87, stdev=10.86 00:33:52.118 clat percentiles (msec): 00:33:52.118 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.118 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 27], 00:33:52.118 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 29], 95.00th=[ 29], 00:33:52.118 | 99.00th=[ 29], 99.50th=[ 29], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.118 | 99.99th=[ 236] 00:33:52.118 bw ( KiB/s): min= 2176, max= 2560, per=4.25%, avg=2412.80, stdev=112.01, samples=20 00:33:52.118 iops : min= 544, max= 640, avg=603.20, stdev=28.00, samples=20 
00:33:52.118 lat (msec) : 10=0.03%, 20=1.03%, 50=98.68%, 250=0.26% 00:33:52.118 cpu : usr=98.44%, sys=0.92%, ctx=124, majf=0, minf=41 00:33:52.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.118 filename1: (groupid=0, jobs=1): err= 0: pid=3172698: Wed Nov 20 10:06:14 2024 00:33:52.118 read: IOPS=587, BW=2351KiB/s (2408kB/s)(23.4MiB/10180msec) 00:33:52.118 slat (usec): min=4, max=107, avg=55.11, stdev=13.40 00:33:52.118 clat (msec): min=24, max=236, avg=26.73, stdev=10.95 00:33:52.118 lat (msec): min=24, max=236, avg=26.78, stdev=10.95 00:33:52.118 clat percentiles (msec): 00:33:52.118 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.118 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.118 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.118 | 99.00th=[ 29], 99.50th=[ 48], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.118 | 99.99th=[ 236] 00:33:52.118 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2387.40, stdev=85.67, samples=20 00:33:52.118 iops : min= 544, max= 640, avg=596.85, stdev=21.42, samples=20 00:33:52.118 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.118 cpu : usr=98.13%, sys=1.10%, ctx=153, majf=0, minf=37 00:33:52.118 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.118 filename1: (groupid=0, jobs=1): 
err= 0: pid=3172699: Wed Nov 20 10:06:14 2024 00:33:52.118 read: IOPS=661, BW=2644KiB/s (2708kB/s)(26.3MiB/10181msec) 00:33:52.118 slat (usec): min=4, max=103, avg=17.09, stdev=17.57 00:33:52.118 clat (msec): min=6, max=248, avg=23.99, stdev= 9.98 00:33:52.118 lat (msec): min=6, max=248, avg=24.01, stdev= 9.98 00:33:52.118 clat percentiles (msec): 00:33:52.118 | 1.00th=[ 15], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18], 00:33:52.118 | 30.00th=[ 22], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 26], 00:33:52.118 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 29], 95.00th=[ 29], 00:33:52.118 | 99.00th=[ 35], 99.50th=[ 38], 99.90th=[ 239], 99.95th=[ 249], 00:33:52.118 | 99.99th=[ 249] 00:33:52.118 bw ( KiB/s): min= 2272, max= 3248, per=4.73%, avg=2685.60, stdev=220.93, samples=20 00:33:52.118 iops : min= 568, max= 812, avg=671.40, stdev=55.23, samples=20 00:33:52.118 lat (msec) : 10=0.83%, 20=23.14%, 50=75.77%, 100=0.12%, 250=0.15% 00:33:52.118 cpu : usr=98.07%, sys=1.18%, ctx=172, majf=0, minf=36 00:33:52.118 IO depths : 1=0.1%, 2=0.1%, 4=2.5%, 8=81.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:33:52.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 complete : 0=0.0%, 4=89.2%, 8=8.7%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.118 issued rwts: total=6730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.118 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.118 filename1: (groupid=0, jobs=1): err= 0: pid=3172701: Wed Nov 20 10:06:14 2024 00:33:52.118 read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.6MiB/10219msec) 00:33:52.118 slat (nsec): min=10364, max=92287, avg=52894.67, stdev=12913.89 00:33:52.118 clat (msec): min=9, max=237, avg=26.58, stdev=10.90 00:33:52.118 lat (msec): min=9, max=237, avg=26.64, stdev=10.91 00:33:52.118 clat percentiles (msec): 00:33:52.118 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.118 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.118 | 70.00th=[ 27], 80.00th=[ 
27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.118 | 99.00th=[ 29], 99.50th=[ 29], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.118 | 99.99th=[ 239] 00:33:52.119 bw ( KiB/s): min= 2176, max= 2560, per=4.25%, avg=2412.80, stdev=119.46, samples=20 00:33:52.119 iops : min= 544, max= 640, avg=603.20, stdev=29.87, samples=20 00:33:52.119 lat (msec) : 10=0.25%, 20=0.61%, 50=98.88%, 250=0.26% 00:33:52.119 cpu : usr=98.88%, sys=0.76%, ctx=16, majf=0, minf=44 00:33:52.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.119 filename1: (groupid=0, jobs=1): err= 0: pid=3172702: Wed Nov 20 10:06:14 2024 00:33:52.119 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.4MiB/10181msec) 00:33:52.119 slat (usec): min=4, max=103, avg=55.19, stdev=12.79 00:33:52.119 clat (msec): min=17, max=237, avg=26.75, stdev=10.96 00:33:52.119 lat (msec): min=17, max=237, avg=26.81, stdev=10.96 00:33:52.119 clat percentiles (msec): 00:33:52.119 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.119 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.119 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.119 | 99.00th=[ 29], 99.50th=[ 48], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.119 | 99.99th=[ 239] 00:33:52.119 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2387.40, stdev=85.67, samples=20 00:33:52.119 iops : min= 544, max= 640, avg=596.85, stdev=21.42, samples=20 00:33:52.119 lat (msec) : 20=0.03%, 50=99.70%, 250=0.27% 00:33:52.119 cpu : usr=98.50%, sys=0.95%, ctx=130, majf=0, minf=37 00:33:52.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.119 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.119 filename1: (groupid=0, jobs=1): err= 0: pid=3172703: Wed Nov 20 10:06:14 2024 00:33:52.119 read: IOPS=588, BW=2355KiB/s (2412kB/s)(23.4MiB/10191msec) 00:33:52.119 slat (usec): min=5, max=101, avg=50.52, stdev=12.12 00:33:52.119 clat (msec): min=24, max=238, avg=26.73, stdev=10.90 00:33:52.119 lat (msec): min=24, max=238, avg=26.78, stdev=10.90 00:33:52.119 clat percentiles (msec): 00:33:52.119 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.119 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.119 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.119 | 99.00th=[ 29], 99.50th=[ 34], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.119 | 99.99th=[ 239] 00:33:52.119 bw ( KiB/s): min= 2167, max= 2560, per=4.21%, avg=2393.15, stdev=111.60, samples=20 00:33:52.119 iops : min= 541, max= 640, avg=598.25, stdev=27.98, samples=20 00:33:52.119 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.119 cpu : usr=98.74%, sys=0.83%, ctx=104, majf=0, minf=35 00:33:52.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.119 filename1: (groupid=0, jobs=1): err= 0: pid=3172704: Wed Nov 20 10:06:14 2024 00:33:52.119 read: IOPS=588, BW=2355KiB/s (2412kB/s)(23.4MiB/10190msec) 00:33:52.119 slat (usec): min=6, max=110, avg=48.53, stdev=14.27 00:33:52.119 clat (msec): min=23, max=236, avg=26.78, stdev=10.89 
00:33:52.119 lat (msec): min=23, max=236, avg=26.83, stdev=10.89 00:33:52.119 clat percentiles (msec): 00:33:52.119 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.119 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.119 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.119 | 99.00th=[ 29], 99.50th=[ 35], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.119 | 99.99th=[ 236] 00:33:52.119 bw ( KiB/s): min= 2176, max= 2560, per=4.21%, avg=2394.00, stdev=117.88, samples=20 00:33:52.119 iops : min= 544, max= 640, avg=598.50, stdev=29.47, samples=20 00:33:52.119 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.119 cpu : usr=98.80%, sys=0.81%, ctx=34, majf=0, minf=31 00:33:52.119 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.119 filename2: (groupid=0, jobs=1): err= 0: pid=3172705: Wed Nov 20 10:06:14 2024 00:33:52.119 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.4MiB/10182msec) 00:33:52.119 slat (usec): min=4, max=132, avg=55.44, stdev=13.56 00:33:52.119 clat (msec): min=16, max=236, avg=26.73, stdev=10.97 00:33:52.119 lat (msec): min=16, max=236, avg=26.79, stdev=10.97 00:33:52.119 clat percentiles (msec): 00:33:52.119 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.119 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.119 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.119 | 99.00th=[ 29], 99.50th=[ 50], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.119 | 99.99th=[ 236] 00:33:52.119 bw ( KiB/s): min= 2167, max= 2560, per=4.20%, avg=2386.75, stdev=87.05, samples=20 00:33:52.119 iops : min= 541, max= 640, avg=596.65, 
stdev=21.86, samples=20 00:33:52.119 lat (msec) : 20=0.03%, 50=99.70%, 250=0.27% 00:33:52.119 cpu : usr=98.89%, sys=0.75%, ctx=26, majf=0, minf=30 00:33:52.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.119 filename2: (groupid=0, jobs=1): err= 0: pid=3172706: Wed Nov 20 10:06:14 2024 00:33:52.119 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10004msec) 00:33:52.119 slat (usec): min=6, max=100, avg=26.12, stdev=20.20 00:33:52.119 clat (usec): min=9680, max=41609, avg=26266.70, stdev=1723.66 00:33:52.119 lat (usec): min=9707, max=41668, avg=26292.82, stdev=1721.06 00:33:52.119 clat percentiles (usec): 00:33:52.119 | 1.00th=[17171], 5.00th=[25297], 10.00th=[25560], 20.00th=[25560], 00:33:52.119 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:33:52.119 | 70.00th=[26608], 80.00th=[27132], 90.00th=[28181], 95.00th=[28443], 00:33:52.119 | 99.00th=[28967], 99.50th=[28967], 99.90th=[30016], 99.95th=[36439], 00:33:52.119 | 99.99th=[41681] 00:33:52.119 bw ( KiB/s): min= 2304, max= 2688, per=4.26%, avg=2418.53, stdev=103.59, samples=19 00:33:52.119 iops : min= 576, max= 672, avg=604.63, stdev=25.90, samples=19 00:33:52.119 lat (msec) : 10=0.17%, 20=0.99%, 50=98.84% 00:33:52.119 cpu : usr=98.37%, sys=1.08%, ctx=61, majf=0, minf=31 00:33:52.119 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.119 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:52.119 filename2: (groupid=0, jobs=1): err= 0: pid=3172707: Wed Nov 20 10:06:14 2024 00:33:52.119 read: IOPS=588, BW=2356KiB/s (2412kB/s)(23.4MiB/10187msec) 00:33:52.119 slat (usec): min=5, max=110, avg=50.00, stdev=12.86 00:33:52.119 clat (msec): min=24, max=236, avg=26.72, stdev=10.88 00:33:52.119 lat (msec): min=24, max=236, avg=26.77, stdev=10.88 00:33:52.119 clat percentiles (msec): 00:33:52.119 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.119 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.119 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.119 | 99.00th=[ 29], 99.50th=[ 32], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.119 | 99.99th=[ 236] 00:33:52.119 bw ( KiB/s): min= 2176, max= 2560, per=4.21%, avg=2393.60, stdev=102.57, samples=20 00:33:52.119 iops : min= 544, max= 640, avg=598.40, stdev=25.64, samples=20 00:33:52.119 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.119 cpu : usr=98.66%, sys=0.93%, ctx=55, majf=0, minf=35 00:33:52.119 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.119 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.119 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.119 filename2: (groupid=0, jobs=1): err= 0: pid=3172708: Wed Nov 20 10:06:14 2024 00:33:52.119 read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.6MiB/10216msec) 00:33:52.119 slat (usec): min=6, max=103, avg=47.95, stdev=13.35 00:33:52.120 clat (msec): min=7, max=238, avg=26.60, stdev=10.94 00:33:52.120 lat (msec): min=7, max=238, avg=26.65, stdev=10.94 00:33:52.120 clat percentiles (msec): 00:33:52.120 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.120 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 
00:33:52.120 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.120 | 99.00th=[ 29], 99.50th=[ 29], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.120 | 99.99th=[ 239] 00:33:52.120 bw ( KiB/s): min= 2176, max= 2608, per=4.25%, avg=2415.20, stdev=115.78, samples=20 00:33:52.120 iops : min= 544, max= 652, avg=603.80, stdev=28.95, samples=20 00:33:52.120 lat (msec) : 10=0.17%, 20=1.07%, 50=98.50%, 250=0.26% 00:33:52.120 cpu : usr=98.87%, sys=0.77%, ctx=14, majf=0, minf=41 00:33:52.120 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:52.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 issued rwts: total=6054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.120 filename2: (groupid=0, jobs=1): err= 0: pid=3172709: Wed Nov 20 10:06:14 2024 00:33:52.120 read: IOPS=588, BW=2355KiB/s (2412kB/s)(23.4MiB/10190msec) 00:33:52.120 slat (usec): min=5, max=101, avg=50.45, stdev=12.78 00:33:52.120 clat (msec): min=24, max=236, avg=26.75, stdev=10.89 00:33:52.120 lat (msec): min=24, max=236, avg=26.80, stdev=10.89 00:33:52.120 clat percentiles (msec): 00:33:52.120 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.120 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.120 | 70.00th=[ 27], 80.00th=[ 28], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.120 | 99.00th=[ 29], 99.50th=[ 35], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.120 | 99.99th=[ 236] 00:33:52.120 bw ( KiB/s): min= 2176, max= 2560, per=4.21%, avg=2394.00, stdev=117.88, samples=20 00:33:52.120 iops : min= 544, max= 640, avg=598.50, stdev=29.47, samples=20 00:33:52.120 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.120 cpu : usr=98.89%, sys=0.75%, ctx=14, majf=0, minf=27 00:33:52.120 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:33:52.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.120 filename2: (groupid=0, jobs=1): err= 0: pid=3172710: Wed Nov 20 10:06:14 2024 00:33:52.120 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.4MiB/10182msec) 00:33:52.120 slat (usec): min=4, max=101, avg=56.28, stdev=12.59 00:33:52.120 clat (msec): min=24, max=237, avg=26.74, stdev=10.96 00:33:52.120 lat (msec): min=24, max=237, avg=26.79, stdev=10.96 00:33:52.120 clat percentiles (msec): 00:33:52.120 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.120 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.120 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.120 | 99.00th=[ 29], 99.50th=[ 48], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.120 | 99.99th=[ 239] 00:33:52.120 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2387.20, stdev=85.87, samples=20 00:33:52.120 iops : min= 544, max= 640, avg=596.80, stdev=21.47, samples=20 00:33:52.120 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.120 cpu : usr=98.96%, sys=0.70%, ctx=14, majf=0, minf=25 00:33:52.120 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.120 filename2: (groupid=0, jobs=1): err= 0: pid=3172712: Wed Nov 20 10:06:14 2024 00:33:52.120 read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.6MiB/10218msec) 00:33:52.120 slat (nsec): min=9943, max=93377, avg=51841.30, stdev=13804.27 00:33:52.120 clat (msec): 
min=9, max=237, avg=26.60, stdev=10.89 00:33:52.120 lat (msec): min=9, max=237, avg=26.65, stdev=10.89 00:33:52.120 clat percentiles (msec): 00:33:52.120 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.120 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.120 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.120 | 99.00th=[ 29], 99.50th=[ 29], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.120 | 99.99th=[ 239] 00:33:52.120 bw ( KiB/s): min= 2176, max= 2560, per=4.25%, avg=2412.80, stdev=119.46, samples=20 00:33:52.120 iops : min= 544, max= 640, avg=603.20, stdev=29.87, samples=20 00:33:52.120 lat (msec) : 10=0.25%, 20=0.61%, 50=98.88%, 250=0.26% 00:33:52.120 cpu : usr=98.93%, sys=0.71%, ctx=18, majf=0, minf=47 00:33:52.120 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.120 filename2: (groupid=0, jobs=1): err= 0: pid=3172713: Wed Nov 20 10:06:14 2024 00:33:52.120 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.4MiB/10181msec) 00:33:52.120 slat (usec): min=4, max=120, avg=56.54, stdev=12.81 00:33:52.120 clat (msec): min=24, max=237, avg=26.73, stdev=10.95 00:33:52.120 lat (msec): min=24, max=237, avg=26.78, stdev=10.95 00:33:52.120 clat percentiles (msec): 00:33:52.120 | 1.00th=[ 25], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 26], 00:33:52.120 | 30.00th=[ 26], 40.00th=[ 26], 50.00th=[ 26], 60.00th=[ 27], 00:33:52.120 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 28], 95.00th=[ 29], 00:33:52.120 | 99.00th=[ 29], 99.50th=[ 47], 99.90th=[ 236], 99.95th=[ 236], 00:33:52.120 | 99.99th=[ 239] 00:33:52.120 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2387.40, stdev=85.67, 
samples=20 00:33:52.120 iops : min= 544, max= 640, avg=596.85, stdev=21.42, samples=20 00:33:52.120 lat (msec) : 50=99.73%, 250=0.27% 00:33:52.120 cpu : usr=98.88%, sys=0.75%, ctx=24, majf=0, minf=29 00:33:52.120 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:52.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.120 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.120 00:33:52.120 Run status group 0 (all jobs): 00:33:52.120 READ: bw=55.5MiB/s (58.2MB/s), 2351KiB/s-2644KiB/s (2407kB/s-2708kB/s), io=568MiB (595MB), run=10004-10235msec 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:52.120 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 bdev_null0 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 
00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 [2024-11-20 10:06:14.383986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 bdev_null1 
00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.121 { 00:33:52.121 "params": { 00:33:52.121 "name": "Nvme$subsystem", 00:33:52.121 "trtype": "$TEST_TRANSPORT", 00:33:52.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.121 "adrfam": "ipv4", 00:33:52.121 "trsvcid": "$NVMF_PORT", 00:33:52.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.121 "hdgst": ${hdgst:-false}, 00:33:52.121 "ddgst": ${ddgst:-false} 00:33:52.121 }, 00:33:52.121 "method": "bdev_nvme_attach_controller" 00:33:52.121 } 00:33:52.121 EOF 00:33:52.121 )") 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 
00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.121 { 00:33:52.121 "params": { 00:33:52.121 "name": "Nvme$subsystem", 00:33:52.121 "trtype": "$TEST_TRANSPORT", 00:33:52.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.121 "adrfam": "ipv4", 00:33:52.121 "trsvcid": "$NVMF_PORT", 00:33:52.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.121 "hdgst": ${hdgst:-false}, 00:33:52.121 "ddgst": ${ddgst:-false} 00:33:52.121 }, 00:33:52.121 "method": "bdev_nvme_attach_controller" 00:33:52.121 } 00:33:52.121 EOF 00:33:52.121 )") 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:52.121 
10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:52.121 10:06:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.121 "params": { 00:33:52.121 "name": "Nvme0", 00:33:52.121 "trtype": "tcp", 00:33:52.122 "traddr": "10.0.0.2", 00:33:52.122 "adrfam": "ipv4", 00:33:52.122 "trsvcid": "4420", 00:33:52.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.122 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.122 "hdgst": false, 00:33:52.122 "ddgst": false 00:33:52.122 }, 00:33:52.122 "method": "bdev_nvme_attach_controller" 00:33:52.122 },{ 00:33:52.122 "params": { 00:33:52.122 "name": "Nvme1", 00:33:52.122 "trtype": "tcp", 00:33:52.122 "traddr": "10.0.0.2", 00:33:52.122 "adrfam": "ipv4", 00:33:52.122 "trsvcid": "4420", 00:33:52.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.122 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:52.122 "hdgst": false, 00:33:52.122 "ddgst": false 00:33:52.122 }, 00:33:52.122 "method": "bdev_nvme_attach_controller" 00:33:52.122 }' 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:52.122 10:06:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.122 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:52.122 ... 00:33:52.122 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:52.122 ... 00:33:52.122 fio-3.35 00:33:52.122 Starting 4 threads 00:33:57.401 00:33:57.401 filename0: (groupid=0, jobs=1): err= 0: pid=3175020: Wed Nov 20 10:06:20 2024 00:33:57.401 read: IOPS=2851, BW=22.3MiB/s (23.4MB/s)(111MiB/5001msec) 00:33:57.401 slat (nsec): min=6110, max=44670, avg=9897.05, stdev=4122.27 00:33:57.401 clat (usec): min=664, max=5797, avg=2773.38, stdev=441.14 00:33:57.401 lat (usec): min=676, max=5816, avg=2783.28, stdev=441.25 00:33:57.401 clat percentiles (usec): 00:33:57.401 | 1.00th=[ 1762], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2442], 00:33:57.401 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2868], 00:33:57.401 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3490], 00:33:57.401 | 99.00th=[ 4113], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5014], 00:33:57.401 | 99.99th=[ 5800] 00:33:57.401 bw ( KiB/s): min=21312, max=25248, per=26.76%, avg=22716.44, stdev=1486.58, samples=9 00:33:57.401 iops : min= 2664, max= 3156, avg=2839.56, stdev=185.82, samples=9 00:33:57.401 lat (usec) : 750=0.01%, 1000=0.15% 00:33:57.401 lat (msec) : 2=2.15%, 4=96.36%, 10=1.33% 00:33:57.401 cpu : usr=93.36%, sys=4.68%, ctx=229, majf=0, minf=9 00:33:57.401 IO depths : 1=0.5%, 2=10.9%, 4=61.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.401 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.401 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.401 issued rwts: total=14259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.401 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:57.401 filename0: (groupid=0, jobs=1): err= 0: pid=3175021: Wed Nov 20 10:06:20 2024 00:33:57.401 read: IOPS=2545, BW=19.9MiB/s (20.9MB/s)(99.5MiB/5001msec) 00:33:57.401 slat (nsec): min=6112, max=36775, avg=9643.95, stdev=3785.94 00:33:57.401 clat (usec): min=966, max=5821, avg=3113.62, stdev=476.25 00:33:57.401 lat (usec): min=973, max=5827, avg=3123.26, stdev=475.76 00:33:57.401 clat percentiles (usec): 00:33:57.401 | 1.00th=[ 2040], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2802], 00:33:57.401 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:57.401 | 70.00th=[ 3228], 80.00th=[ 3425], 90.00th=[ 3720], 95.00th=[ 4015], 00:33:57.401 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5276], 99.95th=[ 5407], 00:33:57.401 | 99.99th=[ 5800] 00:33:57.401 bw ( KiB/s): min=19456, max=22016, per=24.08%, avg=20441.89, stdev=794.92, samples=9 00:33:57.401 iops : min= 2432, max= 2752, avg=2555.22, stdev=99.36, samples=9 00:33:57.401 lat (usec) : 1000=0.02% 00:33:57.401 lat (msec) : 2=0.75%, 4=93.86%, 10=5.37% 00:33:57.401 cpu : usr=93.82%, sys=4.34%, ctx=221, majf=0, minf=9 00:33:57.401 IO depths : 1=0.1%, 2=4.9%, 4=67.9%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.401 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.401 issued rwts: total=12730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.401 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:57.401 filename1: (groupid=0, jobs=1): err= 0: pid=3175022: Wed Nov 20 10:06:20 2024 00:33:57.401 read: IOPS=2691, BW=21.0MiB/s (22.1MB/s)(105MiB/5001msec) 00:33:57.401 slat (nsec): min=6110, 
max=35729, avg=9540.81, stdev=3407.51 00:33:57.401 clat (usec): min=647, max=5472, avg=2943.03, stdev=455.57 00:33:57.401 lat (usec): min=653, max=5484, avg=2952.57, stdev=455.44 00:33:57.401 clat percentiles (usec): 00:33:57.401 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2573], 00:33:57.401 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2999], 00:33:57.401 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3523], 95.00th=[ 3752], 00:33:57.401 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 5080], 99.95th=[ 5211], 00:33:57.401 | 99.99th=[ 5473] 00:33:57.401 bw ( KiB/s): min=19856, max=22428, per=25.24%, avg=21430.67, stdev=801.66, samples=9 00:33:57.401 iops : min= 2482, max= 2803, avg=2678.78, stdev=100.13, samples=9 00:33:57.401 lat (usec) : 750=0.02%, 1000=0.01% 00:33:57.401 lat (msec) : 2=0.85%, 4=96.08%, 10=3.03% 00:33:57.401 cpu : usr=97.10%, sys=2.58%, ctx=11, majf=0, minf=9 00:33:57.401 IO depths : 1=0.1%, 2=6.9%, 4=64.3%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.402 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.402 issued rwts: total=13461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.402 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:57.402 filename1: (groupid=0, jobs=1): err= 0: pid=3175023: Wed Nov 20 10:06:20 2024 00:33:57.402 read: IOPS=2524, BW=19.7MiB/s (20.7MB/s)(98.7MiB/5002msec) 00:33:57.402 slat (nsec): min=6112, max=42584, avg=9279.90, stdev=3630.16 00:33:57.402 clat (usec): min=952, max=5659, avg=3141.54, stdev=510.93 00:33:57.402 lat (usec): min=958, max=5671, avg=3150.82, stdev=510.50 00:33:57.402 clat percentiles (usec): 00:33:57.402 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2835], 00:33:57.402 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:57.402 | 70.00th=[ 3228], 80.00th=[ 3458], 90.00th=[ 3752], 95.00th=[ 4228], 
00:33:57.402 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5211], 99.95th=[ 5342], 00:33:57.402 | 99.99th=[ 5669] 00:33:57.402 bw ( KiB/s): min=19488, max=21728, per=23.88%, avg=20273.78, stdev=722.42, samples=9 00:33:57.402 iops : min= 2436, max= 2716, avg=2534.22, stdev=90.30, samples=9 00:33:57.402 lat (usec) : 1000=0.04% 00:33:57.402 lat (msec) : 2=0.54%, 4=92.80%, 10=6.62% 00:33:57.402 cpu : usr=96.86%, sys=2.82%, ctx=9, majf=0, minf=9 00:33:57.402 IO depths : 1=0.3%, 2=2.3%, 4=70.0%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.402 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.402 issued rwts: total=12629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.402 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:57.402 00:33:57.402 Run status group 0 (all jobs): 00:33:57.402 READ: bw=82.9MiB/s (86.9MB/s), 19.7MiB/s-22.3MiB/s (20.7MB/s-23.4MB/s), io=415MiB (435MB), run=5001-5002msec 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- 
# rpc_cmd bdev_null_delete bdev_null0 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.402 00:33:57.402 real 0m24.182s 00:33:57.402 user 4m56.462s 00:33:57.402 sys 0m4.443s 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:57.402 10:06:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:57.402 ************************************ 00:33:57.402 END TEST fio_dif_rand_params 00:33:57.402 ************************************ 00:33:57.402 10:06:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 
00:33:57.402 10:06:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:57.402 10:06:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:57.402 10:06:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:57.662 ************************************ 00:33:57.662 START TEST fio_dif_digest 00:33:57.662 ************************************ 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.662 bdev_null0 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:57.662 [2024-11-20 10:06:20.784052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:57.662 { 00:33:57.662 "params": { 00:33:57.662 "name": "Nvme$subsystem", 00:33:57.662 "trtype": "$TEST_TRANSPORT", 00:33:57.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:57.662 "adrfam": "ipv4", 00:33:57.662 "trsvcid": "$NVMF_PORT", 00:33:57.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:57.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:57.662 "hdgst": ${hdgst:-false}, 00:33:57.662 "ddgst": ${ddgst:-false} 00:33:57.662 }, 00:33:57.662 "method": "bdev_nvme_attach_controller" 00:33:57.662 } 00:33:57.662 EOF 00:33:57.662 )") 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:57.662 "params": { 00:33:57.662 "name": "Nvme0", 00:33:57.662 "trtype": "tcp", 00:33:57.662 "traddr": "10.0.0.2", 00:33:57.662 "adrfam": "ipv4", 00:33:57.662 "trsvcid": "4420", 00:33:57.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:57.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:57.662 "hdgst": true, 00:33:57.662 "ddgst": true 00:33:57.662 }, 00:33:57.662 "method": "bdev_nvme_attach_controller" 00:33:57.662 }' 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:57.662 10:06:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:57.922 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:57.922 ... 
00:33:57.922 fio-3.35 00:33:57.922 Starting 3 threads 00:34:10.124 00:34:10.124 filename0: (groupid=0, jobs=1): err= 0: pid=3176175: Wed Nov 20 10:06:31 2024 00:34:10.124 read: IOPS=255, BW=31.9MiB/s (33.4MB/s)(321MiB/10071msec) 00:34:10.124 slat (nsec): min=6432, max=51509, avg=12170.13, stdev=4535.61 00:34:10.124 clat (msec): min=7, max=101, avg=11.73, stdev= 9.48 00:34:10.124 lat (msec): min=7, max=101, avg=11.74, stdev= 9.49 00:34:10.124 clat percentiles (msec): 00:34:10.124 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:34:10.124 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:34:10.124 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 12], 00:34:10.124 | 99.00th=[ 61], 99.50th=[ 64], 99.90th=[ 95], 99.95th=[ 102], 00:34:10.124 | 99.99th=[ 103] 00:34:10.124 bw ( KiB/s): min= 6656, max=39936, per=36.60%, avg=32844.80, stdev=12404.25, samples=20 00:34:10.124 iops : min= 52, max= 312, avg=256.60, stdev=96.91, samples=20 00:34:10.124 lat (msec) : 10=54.11%, 20=42.08%, 50=0.23%, 100=3.50%, 250=0.08% 00:34:10.124 cpu : usr=95.95%, sys=3.72%, ctx=15, majf=0, minf=21 00:34:10.124 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.124 issued rwts: total=2569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.124 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.124 filename0: (groupid=0, jobs=1): err= 0: pid=3176176: Wed Nov 20 10:06:31 2024 00:34:10.124 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(275MiB/10067msec) 00:34:10.124 slat (nsec): min=6483, max=43430, avg=12501.38, stdev=2851.12 00:34:10.124 clat (msec): min=8, max=103, avg=13.70, stdev=11.36 00:34:10.124 lat (msec): min=8, max=103, avg=13.71, stdev=11.36 00:34:10.124 clat percentiles (msec): 00:34:10.124 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 
00:34:10.124 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:34:10.124 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 14], 00:34:10.124 | 99.00th=[ 77], 99.50th=[ 81], 99.90th=[ 85], 99.95th=[ 86], 00:34:10.124 | 99.99th=[ 104] 00:34:10.124 bw ( KiB/s): min= 5120, max=34304, per=31.32%, avg=28108.80, stdev=10881.66, samples=20 00:34:10.125 iops : min= 40, max= 268, avg=219.60, stdev=85.01, samples=20 00:34:10.125 lat (msec) : 10=1.96%, 20=94.41%, 50=0.05%, 100=3.55%, 250=0.05% 00:34:10.125 cpu : usr=93.66%, sys=4.60%, ctx=566, majf=0, minf=32 00:34:10.125 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.125 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.125 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.125 filename0: (groupid=0, jobs=1): err= 0: pid=3176177: Wed Nov 20 10:06:31 2024 00:34:10.125 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(287MiB/10068msec) 00:34:10.125 slat (nsec): min=6378, max=36445, avg=11698.05, stdev=2359.98 00:34:10.125 clat (msec): min=8, max=111, avg=13.14, stdev=10.66 00:34:10.125 lat (msec): min=8, max=111, avg=13.15, stdev=10.67 00:34:10.125 clat percentiles (msec): 00:34:10.125 | 1.00th=[ 10], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:34:10.125 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:34:10.125 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 13], 95.00th=[ 13], 00:34:10.125 | 99.00th=[ 71], 99.50th=[ 73], 99.90th=[ 79], 99.95th=[ 80], 00:34:10.125 | 99.99th=[ 112] 00:34:10.125 bw ( KiB/s): min= 5376, max=35584, per=32.66%, avg=29312.00, stdev=11256.50, samples=20 00:34:10.125 iops : min= 42, max= 278, avg=229.00, stdev=87.94, samples=20 00:34:10.125 lat (msec) : 10=7.24%, 20=88.97%, 50=0.13%, 100=3.62%, 250=0.04% 00:34:10.125 cpu : usr=96.29%, 
sys=3.40%, ctx=18, majf=0, minf=25 00:34:10.125 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.125 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.125 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.125 00:34:10.125 Run status group 0 (all jobs): 00:34:10.125 READ: bw=87.6MiB/s (91.9MB/s), 27.3MiB/s-31.9MiB/s (28.6MB/s-33.4MB/s), io=883MiB (925MB), run=10067-10071msec 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.125 00:34:10.125 real 0m11.135s 00:34:10.125 user 0m35.448s 00:34:10.125 sys 0m1.467s 00:34:10.125 10:06:31 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.125 10:06:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.125 ************************************ 00:34:10.125 END TEST fio_dif_digest 00:34:10.125 ************************************ 00:34:10.125 10:06:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:10.125 10:06:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.125 rmmod nvme_tcp 00:34:10.125 rmmod nvme_fabrics 00:34:10.125 rmmod nvme_keyring 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3167281 ']' 00:34:10.125 10:06:31 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3167281 00:34:10.125 10:06:31 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3167281 ']' 00:34:10.125 10:06:31 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3167281 00:34:10.125 10:06:31 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:10.125 10:06:31 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.125 10:06:31 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3167281 00:34:10.125 10:06:32 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:10.125 10:06:32 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:10.125 
10:06:32 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3167281' 00:34:10.125 killing process with pid 3167281 00:34:10.125 10:06:32 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3167281 00:34:10.125 10:06:32 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3167281 00:34:10.125 10:06:32 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:10.125 10:06:32 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:12.031 Waiting for block devices as requested 00:34:12.031 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:12.031 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:12.031 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:12.031 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:12.031 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:12.290 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:12.290 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:12.290 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:12.290 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:12.550 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:12.550 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:12.550 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:12.809 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:12.809 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:12.809 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:12.809 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:13.068 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:13.068 10:06:36 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:13.068 10:06:36 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:13.068 10:06:36 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:13.068 10:06:36 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:13.068 10:06:36 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:13.068 10:06:36 nvmf_dif -- nvmf/common.sh@791 -- # 
iptables-restore 00:34:13.068 10:06:36 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:13.068 10:06:36 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:13.068 10:06:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.068 10:06:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.068 10:06:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.604 10:06:38 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:15.604 00:34:15.604 real 1m14.007s 00:34:15.604 user 7m13.200s 00:34:15.604 sys 0m19.552s 00:34:15.604 10:06:38 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:15.604 10:06:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:15.604 ************************************ 00:34:15.604 END TEST nvmf_dif 00:34:15.604 ************************************ 00:34:15.604 10:06:38 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:15.604 10:06:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:15.604 10:06:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:15.604 10:06:38 -- common/autotest_common.sh@10 -- # set +x 00:34:15.604 ************************************ 00:34:15.604 START TEST nvmf_abort_qd_sizes 00:34:15.604 ************************************ 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:15.604 * Looking for test storage... 
00:34:15.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1703 -- # lcov --version 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:15.604 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:34:15.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.605 --rc genhtml_branch_coverage=1 00:34:15.605 --rc genhtml_function_coverage=1 00:34:15.605 --rc genhtml_legend=1 00:34:15.605 --rc geninfo_all_blocks=1 00:34:15.605 --rc geninfo_unexecuted_blocks=1 00:34:15.605 00:34:15.605 ' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:34:15.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.605 --rc genhtml_branch_coverage=1 00:34:15.605 --rc genhtml_function_coverage=1 00:34:15.605 --rc genhtml_legend=1 00:34:15.605 --rc 
geninfo_all_blocks=1 00:34:15.605 --rc geninfo_unexecuted_blocks=1 00:34:15.605 00:34:15.605 ' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:34:15.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.605 --rc genhtml_branch_coverage=1 00:34:15.605 --rc genhtml_function_coverage=1 00:34:15.605 --rc genhtml_legend=1 00:34:15.605 --rc geninfo_all_blocks=1 00:34:15.605 --rc geninfo_unexecuted_blocks=1 00:34:15.605 00:34:15.605 ' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:34:15.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.605 --rc genhtml_branch_coverage=1 00:34:15.605 --rc genhtml_function_coverage=1 00:34:15.605 --rc genhtml_legend=1 00:34:15.605 --rc geninfo_all_blocks=1 00:34:15.605 --rc geninfo_unexecuted_blocks=1 00:34:15.605 00:34:15.605 ' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:15.605 10:06:38 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:15.605 10:06:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:15.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:15.605 10:06:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.170 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.171 10:06:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:22.171 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:22.171 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:22.171 Found net devices under 0000:86:00.0: cvl_0_0 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:22.171 Found net devices under 0000:86:00.1: cvl_0_1 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:34:22.171 00:34:22.171 --- 10.0.0.2 ping statistics --- 00:34:22.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.171 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:22.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:34:22.171 00:34:22.171 --- 10.0.0.1 ping statistics --- 00:34:22.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.171 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:22.171 10:06:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:24.076 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:24.076 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:25.152 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:25.152 10:06:48 
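For readers reconstructing the environment from this trace, the `nvmf_tcp_init` sequence above (nvmf/common.sh@265-291) reduces to a short run of `ip`/`iptables` commands. The sketch below replays them; the interface names `cvl_0_0`/`cvl_0_1`, the 10.0.0.0/24 addresses, and port 4420 are taken from the log, while the `run` helper and `DRY_RUN` guard are illustrative additions (the real commands need root).

```shell
# Sketch of the NVMe/TCP loopback setup traced above. DRY_RUN=1 (the default
# here) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target side moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP inside the netns
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` checks in the log (root ns to 10.0.0.2, netns to 10.0.0.1) then verify the link in both directions before `return 0`.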
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3183977 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3183977 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3183977 ']' 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:25.152 10:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.152 [2024-11-20 10:06:48.366492] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:34:25.152 [2024-11-20 10:06:48.366538] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.152 [2024-11-20 10:06:48.447392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:25.410 [2024-11-20 10:06:48.495383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.410 [2024-11-20 10:06:48.495417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.410 [2024-11-20 10:06:48.495425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.410 [2024-11-20 10:06:48.495431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.410 [2024-11-20 10:06:48.495437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:25.410 [2024-11-20 10:06:48.497021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.410 [2024-11-20 10:06:48.497134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:25.410 [2024-11-20 10:06:48.497240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.410 [2024-11-20 10:06:48.497241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
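The `waitforlisten 3183977` step above (common/autotest_common.sh@835-868) blocks until the freshly launched `nvmf_tgt` answers on `/var/tmp/spdk.sock`. The real helper also probes the RPC endpoint; the stand-in below is a deliberate simplification that only polls for the UNIX socket to appear — the function name, retry count, and interval are all illustrative assumptions.

```shell
# Simplified stand-in for waitforlisten: poll until a UNIX domain socket
# exists, up to $2 attempts (default 100) at 0.1 s apart. Returns 0 on
# success, 1 on timeout. The trace's helper additionally issues an RPC.
wait_for_socket() {
    sock=$1
    retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```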
00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:25.980 10:06:49 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:25.981 10:06:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:25.981 10:06:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:25.981 10:06:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:25.981 10:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:25.981 10:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.981 10:06:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.981 ************************************ 00:34:25.981 START TEST spdk_target_abort 00:34:25.981 ************************************ 00:34:25.981 10:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:25.981 10:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:25.981 10:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:25.981 10:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.981 10:06:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.262 spdk_targetn1 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- 
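The `nvme_in_userspace` trace above (scripts/common.sh@312-329) selects the NVMe-class PCI functions (class `0x010802` per the log) that sit under the kernel `nvme` driver's sysfs directory, yielding `0000:5e:00.0` here. A standalone approximation — the function name and taking BDFs as arguments are illustrative; the real helper pulls candidates from `pci_bus_cache["0x010802"]`:

```shell
# Print only the PCI BDFs currently bound to the kernel nvme driver.
# A BDF appears under /sys/bus/pci/drivers/nvme/ only while bound to it.
nvme_in_userspace_sketch() {
    for bdf in "$@"; do
        if [ -e "/sys/bus/pci/drivers/nvme/$bdf" ]; then
            printf '%s\n' "$bdf"
        fi
    done
}
```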
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.262 [2024-11-20 10:06:52.130148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.262 [2024-11-20 10:06:52.177354] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:29.262 10:06:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:32.542 Initializing NVMe Controllers 00:34:32.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:32.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:32.542 Initialization complete. Launching workers. 
00:34:32.542 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16022, failed: 0 00:34:32.542 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1333, failed to submit 14689 00:34:32.542 success 744, unsuccessful 589, failed 0 00:34:32.542 10:06:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:32.542 10:06:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:35.820 Initializing NVMe Controllers 00:34:35.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:35.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:35.820 Initialization complete. Launching workers. 00:34:35.820 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8535, failed: 0 00:34:35.820 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1273, failed to submit 7262 00:34:35.820 success 328, unsuccessful 945, failed 0 00:34:35.820 10:06:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:35.820 10:06:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:39.104 Initializing NVMe Controllers 00:34:39.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:39.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:39.104 Initialization complete. Launching workers. 
00:34:39.104 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37371, failed: 0 00:34:39.104 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2861, failed to submit 34510 00:34:39.104 success 586, unsuccessful 2275, failed 0 00:34:39.104 10:07:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:39.104 10:07:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.104 10:07:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:39.104 10:07:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.104 10:07:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:39.104 10:07:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.104 10:07:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3183977 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3183977 ']' 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3183977 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3183977 00:34:40.037 10:07:03 
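All three abort runs above receive the same `-r` transport string, which the trace shows `rabort()` assembling field by field (target/abort_qd_sizes.sh@28-29). A self-contained reproduction of that loop, with the same field order as the log:

```shell
# Rebuild the abort tool's -r argument the way rabort() does in the trace:
# append "name:value" for each transport field in a fixed order.
build_target() {
    trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    target=
    for r in trtype adrfam traddr trsvcid subnqn; do
        eval "v=\$$r"
        target="${target:+$target }$r:$v"
    done
    printf '%s\n' "$target"
}
```

With the values from this run, `build_target tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn` yields exactly the string passed to `build/examples/abort` above.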
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3183977' 00:34:40.037 killing process with pid 3183977 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3183977 00:34:40.037 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3183977 00:34:40.295 00:34:40.295 real 0m14.126s 00:34:40.295 user 0m56.311s 00:34:40.295 sys 0m2.584s 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.295 ************************************ 00:34:40.295 END TEST spdk_target_abort 00:34:40.295 ************************************ 00:34:40.295 10:07:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:40.295 10:07:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:40.295 10:07:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:40.295 10:07:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:40.295 ************************************ 00:34:40.295 START TEST kernel_target_abort 00:34:40.295 ************************************ 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:40.295 10:07:03 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:40.295 10:07:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:43.585 Waiting for block devices as requested 00:34:43.585 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:43.585 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:43.585 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:43.585 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:43.585 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:43.585 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:43.585 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:43.585 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:43.844 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:43.844 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:43.844 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:43.844 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:44.103 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:44.103 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:44.103 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:44.362 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:44.362 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:44.362 10:07:07 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:44.362 No valid GPT data, bailing 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:44.362 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:44.621 00:34:44.621 Discovery Log Number of Records 2, Generation counter 2 00:34:44.621 =====Discovery Log Entry 0====== 00:34:44.621 trtype: tcp 00:34:44.621 adrfam: ipv4 00:34:44.621 subtype: current discovery subsystem 00:34:44.621 treq: not specified, sq flow control disable supported 00:34:44.621 portid: 1 00:34:44.621 trsvcid: 4420 00:34:44.621 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:44.621 traddr: 10.0.0.1 00:34:44.621 eflags: none 00:34:44.621 sectype: none 00:34:44.621 =====Discovery Log Entry 1====== 00:34:44.621 trtype: tcp 00:34:44.621 adrfam: ipv4 00:34:44.621 subtype: nvme subsystem 00:34:44.621 treq: not specified, sq flow control disable supported 00:34:44.621 portid: 1 00:34:44.621 trsvcid: 4420 00:34:44.621 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:44.621 traddr: 10.0.0.1 00:34:44.621 eflags: none 00:34:44.621 sectype: none 00:34:44.621 10:07:07 
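The `configure_kernel_target` sequence above (nvmf/common.sh@660-708) builds the in-kernel nvmet target through configfs. The xtrace only records the `echo` halves of the redirections, so the attribute file names in the sketch below are filled in from the standard Linux nvmet configfs layout — an assumption relative to this log. The `run`/`write` helpers and `DRY_RUN` guard are illustrative; the real steps need root and the `nvmet`/`nvmet-tcp` modules loaded.

```shell
# Dry-run sketch of the kernel nvmet target setup traced above.
# write FILE VALUE stands in for the trace's `echo VALUE > FILE` redirections.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }
write() { printf '%s\n' "$2" > "$1"; }

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
run mkdir "$subsys"
run mkdir "$subsys/namespaces/1"
run mkdir "$port"
run write "$subsys/attr_serial" SPDK-nqn.2016-06.io.spdk:testnqn
run write "$subsys/attr_allow_any_host" 1
run write "$subsys/namespaces/1/device_path" /dev/nvme0n1
run write "$subsys/namespaces/1/enable" 1
run write "$port/addr_traddr" 10.0.0.1
run write "$port/addr_trtype" tcp
run write "$port/addr_trsvcid" 4420
run write "$port/addr_adrfam" ipv4
run ln -s "$subsys" "$port/subsystems/"
```

The final `ln -s` exposes the subsystem on the port, which is why the `nvme discover` output above lists both the discovery subsystem and `nqn.2016-06.io.spdk:testnqn` on 10.0.0.1:4420.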
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:44.621 10:07:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:47.910 Initializing NVMe Controllers 00:34:47.910 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:47.910 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:47.910 Initialization complete. Launching workers. 
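The xtrace above shows `rabort` assembling its `-r` target string one field at a time (`trtype:tcp`, then `trtype:tcp adrfam:IPv4`, and so on) and then looping over the queue depths `(4 24 64)`. A minimal reconstruction of that pattern, with values taken from the trace — a sketch of the traced logic, not the canonical `abort_qd_sizes.sh`; the real test invokes `build/examples/abort`, which this sketch only echoes:

```shell
# Sketch of the rabort pattern traced above: build the -r target string
# field by field via bash indirect expansion (${!r}), then issue one
# abort run per queue depth. Echoes the command line instead of running it.
rabort() {
	local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
	local qds qd target="" r
	qds=(4 24 64)
	for r in trtype adrfam traddr trsvcid subnqn; do
		target="${target:+$target }$r:${!r}"   # append "name:value" pairs
	done
	for qd in "${qds[@]}"; do
		echo "abort -q $qd -w rw -M 50 -o 4096 -r '$target'"
	done
}
rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
```

The `${target:+$target }` expansion avoids a leading space on the first field, which is why the trace shows the string growing cleanly from `trtype:tcp` onward.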
00:34:47.910 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92297, failed: 0 00:34:47.910 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92297, failed to submit 0 00:34:47.910 success 0, unsuccessful 92297, failed 0 00:34:47.910 10:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:47.910 10:07:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:51.198 Initializing NVMe Controllers 00:34:51.198 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:51.198 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:51.198 Initialization complete. Launching workers. 00:34:51.198 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145794, failed: 0 00:34:51.199 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36490, failed to submit 109304 00:34:51.199 success 0, unsuccessful 36490, failed 0 00:34:51.199 10:07:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:51.199 10:07:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:54.482 Initializing NVMe Controllers 00:34:54.482 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:54.482 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:54.482 Initialization complete. Launching workers. 
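Each run above reports counters in lines shaped like `CTRLR: TCP (...) abort submitted 36490, failed to submit 109304`. A hypothetical awk helper for pulling those two counters out of such lines — the field positions match the sample lines in this log, not any stable SPDK output format:

```shell
# Hypothetical helper (not part of the test suite): extract the abort
# counters from log lines shaped like
#   CTRLR: TCP (...) abort submitted 36490, failed to submit 109304
abort_stats() {
	awk '/abort submitted/ {
		sub(/,/, "", $(NF-4))   # strip the trailing comma from the submitted count
		printf "submitted=%s failed_to_submit=%s\n", $(NF-4), $NF
	}'
}
printf '%s\n' 'CTRLR: TCP (addr:10.0.0.1) abort submitted 36490, failed to submit 109304' \
	| abort_stats   # prints: submitted=36490 failed_to_submit=109304
```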
00:34:54.482 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137933, failed: 0 00:34:54.482 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34546, failed to submit 103387 00:34:54.482 success 0, unsuccessful 34546, failed 0 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:54.482 10:07:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:57.018 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:57.018 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:57.585 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:57.845 00:34:57.845 real 0m17.506s 00:34:57.845 user 0m9.207s 00:34:57.845 sys 0m5.021s 00:34:57.845 10:07:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:57.845 10:07:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:57.845 ************************************ 00:34:57.845 END TEST kernel_target_abort 00:34:57.845 ************************************ 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:57.845 rmmod nvme_tcp 00:34:57.845 rmmod nvme_fabrics 00:34:57.845 rmmod nvme_keyring 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3183977 ']' 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3183977 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3183977 ']' 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3183977 00:34:57.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3183977) - No such process 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3183977 is not found' 00:34:57.845 Process with pid 3183977 is not found 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:57.845 10:07:21 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:01.136 Waiting for block devices as requested 00:35:01.136 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:01.136 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:01.136 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:01.136 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:01.136 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:01.136 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:01.136 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:01.136 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:01.395 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:01.395 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:01.395 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:01.395 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:01.654 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:01.654 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:01.654 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:01.913 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:01.913 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:01.913 10:07:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.449 10:07:27 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:04.449 00:35:04.449 real 0m48.838s 00:35:04.449 user 1m10.027s 00:35:04.449 sys 0m16.349s 00:35:04.449 10:07:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:04.449 10:07:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:04.449 ************************************ 00:35:04.449 END TEST nvmf_abort_qd_sizes 00:35:04.449 ************************************ 00:35:04.449 10:07:27 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:04.449 10:07:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:04.449 10:07:27 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:04.449 10:07:27 -- common/autotest_common.sh@10 -- # set +x 00:35:04.449 ************************************ 00:35:04.449 START TEST keyring_file 00:35:04.449 ************************************ 00:35:04.449 10:07:27 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:04.449 * Looking for test storage... 00:35:04.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:04.449 10:07:27 keyring_file -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:35:04.449 10:07:27 keyring_file -- common/autotest_common.sh@1703 -- # lcov --version 00:35:04.449 10:07:27 keyring_file -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:35:04.449 10:07:27 keyring_file -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:04.449 10:07:27 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:04.449 10:07:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:04.449 10:07:27 keyring_file -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:04.449 10:07:27 keyring_file -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:35:04.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.449 --rc genhtml_branch_coverage=1 00:35:04.449 --rc genhtml_function_coverage=1 00:35:04.449 --rc genhtml_legend=1 00:35:04.449 --rc geninfo_all_blocks=1 00:35:04.449 --rc geninfo_unexecuted_blocks=1 00:35:04.449 00:35:04.449 ' 00:35:04.449 10:07:27 keyring_file -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:35:04.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.449 --rc genhtml_branch_coverage=1 00:35:04.449 --rc genhtml_function_coverage=1 00:35:04.449 --rc genhtml_legend=1 00:35:04.449 --rc geninfo_all_blocks=1 00:35:04.449 --rc 
geninfo_unexecuted_blocks=1 00:35:04.449 00:35:04.449 ' 00:35:04.449 10:07:27 keyring_file -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:35:04.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.449 --rc genhtml_branch_coverage=1 00:35:04.450 --rc genhtml_function_coverage=1 00:35:04.450 --rc genhtml_legend=1 00:35:04.450 --rc geninfo_all_blocks=1 00:35:04.450 --rc geninfo_unexecuted_blocks=1 00:35:04.450 00:35:04.450 ' 00:35:04.450 10:07:27 keyring_file -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:35:04.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.450 --rc genhtml_branch_coverage=1 00:35:04.450 --rc genhtml_function_coverage=1 00:35:04.450 --rc genhtml_legend=1 00:35:04.450 --rc geninfo_all_blocks=1 00:35:04.450 --rc geninfo_unexecuted_blocks=1 00:35:04.450 00:35:04.450 ' 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.450 10:07:27 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.450 10:07:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.450 10:07:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.450 10:07:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.450 10:07:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.450 10:07:27 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.450 10:07:27 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.450 10:07:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.450 10:07:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:04.450 10:07:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:04.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pBM9ujHdXZ 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pBM9ujHdXZ 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pBM9ujHdXZ 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.pBM9ujHdXZ 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.08Ww5SBgeI 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:04.450 10:07:27 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.08Ww5SBgeI 00:35:04.450 10:07:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.08Ww5SBgeI 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.08Ww5SBgeI 
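The `prep_key` traces above follow a fixed pattern: `mktemp` a key file, write the `NVMeTLSkey-1` interchange-formatted PSK (produced by the `python -` step in the trace), and `chmod 0600` it. A sketch of that pattern, with the actual PSK encoding replaced by a placeholder since the trace elides the python helper's body:

```shell
# Sketch of the prep_key pattern traced above: one temp file per key,
# locked to owner-only permissions as the trace's chmod 0600 shows.
# The NVMeTLSkey-1 interchange encoding is done by a python helper in the
# real script; the placeholder echo below stands in for it (assumption).
prep_key_sketch() {
	local name=$1 key=$2 path
	path=$(mktemp)                                   # e.g. /tmp/tmp.XXXXXXXXXX
	echo "NVMeTLSkey-1:placeholder" > "$path"        # real: format_interchange_psk "$key"
	chmod 0600 "$path"
	echo "$path"
}
keypath=$(prep_key_sketch key0 00112233445566778899aabbccddeeff)
stat -c %a "$keypath"   # → 600
rm -f "$keypath"
```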
00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=3192767 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:04.450 10:07:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3192767 00:35:04.450 10:07:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3192767 ']' 00:35:04.450 10:07:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.450 10:07:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.450 10:07:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.450 10:07:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.450 10:07:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:04.450 [2024-11-20 10:07:27.723864] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:35:04.450 [2024-11-20 10:07:27.723913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192767 ] 00:35:04.709 [2024-11-20 10:07:27.799831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.709 [2024-11-20 10:07:27.842186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:04.967 10:07:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:04.967 [2024-11-20 10:07:28.049621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.967 null0 00:35:04.967 [2024-11-20 10:07:28.081671] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:04.967 [2024-11-20 10:07:28.082040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.967 10:07:28 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
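The `NOT`/`valid_exec_arg` machinery traced above wraps an `rpc_cmd` whose failure is the expected outcome of the test, after first checking the argument's type with `type -t`. A minimal sketch of the exit-status inversion that pattern relies on (the real `autotest_common.sh` helper does more; this captures only the core idea):

```shell
# Minimal sketch of the NOT helper pattern: succeed only when the wrapped
# command fails, so expected-error paths can be asserted directly.
NOT() {
	if "$@"; then
		return 1   # command unexpectedly succeeded
	fi
	return 0       # command failed, as the negative test expected
}
NOT false && echo "negative test passed"   # prints: negative test passed
```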
00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.967 10:07:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:04.967 [2024-11-20 10:07:28.109747] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:04.967 request: 00:35:04.967 { 00:35:04.967 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:04.968 "secure_channel": false, 00:35:04.968 "listen_address": { 00:35:04.968 "trtype": "tcp", 00:35:04.968 "traddr": "127.0.0.1", 00:35:04.968 "trsvcid": "4420" 00:35:04.968 }, 00:35:04.968 "method": "nvmf_subsystem_add_listener", 00:35:04.968 "req_id": 1 00:35:04.968 } 00:35:04.968 Got JSON-RPC error response 00:35:04.968 response: 00:35:04.968 { 00:35:04.968 "code": -32602, 00:35:04.968 "message": "Invalid parameters" 00:35:04.968 } 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:04.968 10:07:28 keyring_file -- keyring/file.sh@47 -- # bperfpid=3192782 00:35:04.968 10:07:28 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3192782 /var/tmp/bperf.sock 00:35:04.968 10:07:28 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:04.968 10:07:28 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3192782 ']' 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:04.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.968 10:07:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:04.968 [2024-11-20 10:07:28.162234] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 00:35:04.968 [2024-11-20 10:07:28.162274] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192782 ] 00:35:04.968 [2024-11-20 10:07:28.236153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.968 [2024-11-20 10:07:28.278689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.225 10:07:28 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.225 10:07:28 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:05.225 10:07:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pBM9ujHdXZ 00:35:05.225 10:07:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pBM9ujHdXZ 00:35:05.484 10:07:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.08Ww5SBgeI 00:35:05.484 10:07:28 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.08Ww5SBgeI 00:35:05.484 10:07:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:05.484 10:07:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:05.484 10:07:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.484 10:07:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:05.484 10:07:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.742 10:07:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.pBM9ujHdXZ == \/\t\m\p\/\t\m\p\.\p\B\M\9\u\j\H\d\X\Z ]] 00:35:05.742 10:07:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:05.742 10:07:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:05.742 10:07:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.742 10:07:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:05.742 10:07:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.002 10:07:29 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.08Ww5SBgeI == \/\t\m\p\/\t\m\p\.\0\8\W\w\5\S\B\g\e\I ]] 00:35:06.002 10:07:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:06.002 10:07:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.002 10:07:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.002 10:07:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.002 10:07:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.002 10:07:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
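The trace above shows the test's verification pattern: `keyring_get_keys` returns a JSON array over the bperf RPC socket, and the helpers in `keyring/common.sh` pipe it through `jq` to select one key by name and extract a field (`.path`, later `.refcnt`). A minimal standalone sketch of that pattern follows; the sample JSON is illustrative (shaped like SPDK's `keyring_get_keys` output, with made-up temp-file paths), not captured from this run, and the helper names mirror but do not reproduce the actual test script.

```shell
#!/usr/bin/env bash
# Illustrative stand-in for `rpc.py -s /var/tmp/bperf.sock keyring_get_keys`:
# a static JSON array shaped like SPDK's keyring_get_keys response.
keys='[{"name":"key0","path":"/tmp/tmp.pBM9ujHdXZ","refcnt":1},
       {"name":"key1","path":"/tmp/tmp.08Ww5SBgeI","refcnt":1}]'

# Select one key object by name, as keyring/common.sh does with jq.
get_key()    { echo "$keys" | jq ".[] | select(.name == \"$1\")"; }
# Extract single fields with -r so the value comes back unquoted.
get_path()   { get_key "$1" | jq -r .path; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

# Same checks the log performs at file.sh@52-55: path matches, refcnt is 1.
[ "$(get_path key0)" = "/tmp/tmp.pBM9ujHdXZ" ] || exit 1
[ "$(get_refcnt key0)" -eq 1 ] || exit 1
echo "key0 verified"
```

The `(( 1 == 1 ))` lines in the trace are these refcount comparisons after xtrace has already substituted the `jq` result; a refcount of 2 later in the log indicates the key is additionally held by an attached NVMe controller.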
00:35:06.261 10:07:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:06.261 10:07:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:06.261 10:07:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:06.261 10:07:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.261 10:07:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.261 10:07:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:06.261 10:07:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.261 10:07:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:06.261 10:07:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:06.261 10:07:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:06.518 [2024-11-20 10:07:29.737063] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:06.519 nvme0n1 00:35:06.519 10:07:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:06.519 10:07:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.519 10:07:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.519 10:07:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.519 10:07:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.519 10:07:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:06.777 10:07:30 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:06.777 10:07:30 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:06.777 10:07:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:06.777 10:07:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.777 10:07:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.777 10:07:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.777 10:07:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:07.035 10:07:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:07.036 10:07:30 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:07.036 Running I/O for 1 seconds... 00:35:08.412 18469.00 IOPS, 72.14 MiB/s 00:35:08.412 Latency(us) 00:35:08.412 [2024-11-20T09:07:31.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.412 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:08.412 nvme0n1 : 1.00 18514.46 72.32 0.00 0.00 6901.11 4445.05 17894.18 00:35:08.412 [2024-11-20T09:07:31.744Z] =================================================================================================================== 00:35:08.412 [2024-11-20T09:07:31.744Z] Total : 18514.46 72.32 0.00 0.00 6901.11 4445.05 17894.18 00:35:08.412 { 00:35:08.412 "results": [ 00:35:08.412 { 00:35:08.412 "job": "nvme0n1", 00:35:08.412 "core_mask": "0x2", 00:35:08.412 "workload": "randrw", 00:35:08.412 "percentage": 50, 00:35:08.412 "status": "finished", 00:35:08.412 "queue_depth": 128, 00:35:08.412 "io_size": 4096, 00:35:08.412 "runtime": 1.004458, 00:35:08.412 "iops": 18514.46252605883, 00:35:08.412 "mibps": 72.3221192424173, 
00:35:08.412 "io_failed": 0, 00:35:08.412 "io_timeout": 0, 00:35:08.412 "avg_latency_us": 6901.1063824693565, 00:35:08.412 "min_latency_us": 4445.050434782609, 00:35:08.412 "max_latency_us": 17894.177391304347 00:35:08.412 } 00:35:08.412 ], 00:35:08.412 "core_count": 1 00:35:08.412 } 00:35:08.412 10:07:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:08.412 10:07:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:08.412 10:07:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:08.412 10:07:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:08.412 10:07:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.412 10:07:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.412 10:07:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.412 10:07:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:08.670 10:07:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:08.670 10:07:31 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:08.670 10:07:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:08.670 10:07:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.670 10:07:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.670 10:07:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:08.670 10:07:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.670 10:07:31 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:08.670 10:07:31 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:08.670 10:07:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:08.670 10:07:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:08.670 10:07:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:08.670 10:07:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.670 10:07:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:08.670 10:07:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:08.670 10:07:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:08.670 10:07:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:08.929 [2024-11-20 10:07:32.134259] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:08.929 [2024-11-20 10:07:32.135049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1fd00 (107): Transport endpoint is not connected 00:35:08.929 [2024-11-20 10:07:32.136042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1fd00 (9): Bad file descriptor 00:35:08.929 [2024-11-20 10:07:32.137044] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:08.929 [2024-11-20 10:07:32.137054] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:08.929 [2024-11-20 10:07:32.137061] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:08.929 [2024-11-20 10:07:32.137069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:08.929 request: 00:35:08.929 { 00:35:08.929 "name": "nvme0", 00:35:08.929 "trtype": "tcp", 00:35:08.929 "traddr": "127.0.0.1", 00:35:08.929 "adrfam": "ipv4", 00:35:08.929 "trsvcid": "4420", 00:35:08.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.929 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.929 "prchk_reftag": false, 00:35:08.929 "prchk_guard": false, 00:35:08.929 "hdgst": false, 00:35:08.929 "ddgst": false, 00:35:08.929 "psk": "key1", 00:35:08.929 "allow_unrecognized_csi": false, 00:35:08.929 "method": "bdev_nvme_attach_controller", 00:35:08.929 "req_id": 1 00:35:08.929 } 00:35:08.929 Got JSON-RPC error response 00:35:08.929 response: 00:35:08.929 { 00:35:08.929 "code": -5, 00:35:08.929 "message": "Input/output error" 00:35:08.929 } 00:35:08.929 10:07:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:08.929 10:07:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:08.929 10:07:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:08.929 10:07:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:08.929 10:07:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:08.929 10:07:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:08.929 10:07:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.929 10:07:32 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:08.929 10:07:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:08.929 10:07:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.188 10:07:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:09.188 10:07:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:09.188 10:07:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:09.188 10:07:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.188 10:07:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.188 10:07:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:09.188 10:07:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.448 10:07:32 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:09.448 10:07:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:09.448 10:07:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:09.448 10:07:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:09.448 10:07:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:09.707 10:07:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:09.707 10:07:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.707 10:07:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:09.965 10:07:33 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:09.965 10:07:33 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.pBM9ujHdXZ 00:35:09.965 10:07:33 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.pBM9ujHdXZ 00:35:09.965 10:07:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:09.965 10:07:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.pBM9ujHdXZ 00:35:09.965 10:07:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:09.965 10:07:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.965 10:07:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:09.965 10:07:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.965 10:07:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pBM9ujHdXZ 00:35:09.965 10:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pBM9ujHdXZ 00:35:10.223 [2024-11-20 10:07:33.327179] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pBM9ujHdXZ': 0100660 00:35:10.223 [2024-11-20 10:07:33.327205] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:10.223 request: 00:35:10.223 { 00:35:10.223 "name": "key0", 00:35:10.223 "path": "/tmp/tmp.pBM9ujHdXZ", 00:35:10.223 "method": "keyring_file_add_key", 00:35:10.223 "req_id": 1 00:35:10.223 } 00:35:10.223 Got JSON-RPC error response 00:35:10.223 response: 00:35:10.223 { 00:35:10.223 "code": -1, 00:35:10.223 "message": "Operation not permitted" 00:35:10.223 } 00:35:10.223 10:07:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:10.223 10:07:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:10.223 10:07:33 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:10.223 10:07:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:10.223 10:07:33 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.pBM9ujHdXZ 00:35:10.223 10:07:33 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pBM9ujHdXZ 00:35:10.223 10:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pBM9ujHdXZ 00:35:10.223 10:07:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.pBM9ujHdXZ 00:35:10.223 10:07:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:10.223 10:07:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:10.223 10:07:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.223 10:07:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.223 10:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.223 10:07:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.482 10:07:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:10.482 10:07:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.482 10:07:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:10.482 10:07:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.482 10:07:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:10.482 10:07:33 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:10.482 10:07:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:10.482 10:07:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:10.482 10:07:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.482 10:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.740 [2024-11-20 10:07:33.908728] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.pBM9ujHdXZ': No such file or directory 00:35:10.740 [2024-11-20 10:07:33.908751] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:10.740 [2024-11-20 10:07:33.908767] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:10.740 [2024-11-20 10:07:33.908790] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:10.740 [2024-11-20 10:07:33.908802] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:10.740 [2024-11-20 10:07:33.908809] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:10.740 request: 00:35:10.740 { 00:35:10.740 "name": "nvme0", 00:35:10.740 "trtype": "tcp", 00:35:10.740 "traddr": "127.0.0.1", 00:35:10.740 "adrfam": "ipv4", 00:35:10.740 "trsvcid": "4420", 00:35:10.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.740 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:10.740 "prchk_reftag": false, 00:35:10.740 "prchk_guard": false, 00:35:10.740 "hdgst": false, 00:35:10.740 "ddgst": false, 00:35:10.740 "psk": "key0", 00:35:10.740 "allow_unrecognized_csi": false, 00:35:10.740 "method": "bdev_nvme_attach_controller", 00:35:10.740 "req_id": 1 00:35:10.740 } 00:35:10.740 Got JSON-RPC error response 00:35:10.740 response: 00:35:10.740 { 00:35:10.740 "code": -19, 00:35:10.740 "message": "No such device" 00:35:10.740 } 00:35:10.740 10:07:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:10.740 10:07:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:10.740 10:07:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:10.740 10:07:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:10.740 10:07:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:10.740 10:07:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:10.999 10:07:34 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7gRuJs9Osk 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:10.999 10:07:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:10.999 10:07:34 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:10.999 10:07:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:10.999 10:07:34 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:10.999 10:07:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:10.999 10:07:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7gRuJs9Osk 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7gRuJs9Osk 00:35:10.999 10:07:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.7gRuJs9Osk 00:35:10.999 10:07:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7gRuJs9Osk 00:35:10.999 10:07:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7gRuJs9Osk 00:35:11.257 10:07:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.257 10:07:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.515 nvme0n1 00:35:11.515 10:07:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:11.515 10:07:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.515 10:07:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.515 10:07:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.515 10:07:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.515 
10:07:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.515 10:07:34 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:11.515 10:07:34 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:11.515 10:07:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:11.773 10:07:35 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:11.773 10:07:35 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:11.773 10:07:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.773 10:07:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.773 10:07:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.032 10:07:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:12.032 10:07:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:12.032 10:07:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.032 10:07:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.032 10:07:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.032 10:07:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.032 10:07:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.290 10:07:35 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:12.290 10:07:35 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:12.290 10:07:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:35:12.290 10:07:35 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:12.290 10:07:35 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:12.290 10:07:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.548 10:07:35 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:12.548 10:07:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7gRuJs9Osk 00:35:12.548 10:07:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7gRuJs9Osk 00:35:12.807 10:07:36 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.08Ww5SBgeI 00:35:12.807 10:07:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.08Ww5SBgeI 00:35:13.066 10:07:36 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:13.066 10:07:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:13.324 nvme0n1 00:35:13.324 10:07:36 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:13.325 10:07:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:13.583 10:07:36 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:13.583 "subsystems": [ 00:35:13.583 { 00:35:13.583 "subsystem": "keyring", 00:35:13.583 
"config": [ 00:35:13.583 { 00:35:13.583 "method": "keyring_file_add_key", 00:35:13.583 "params": { 00:35:13.583 "name": "key0", 00:35:13.583 "path": "/tmp/tmp.7gRuJs9Osk" 00:35:13.583 } 00:35:13.583 }, 00:35:13.583 { 00:35:13.583 "method": "keyring_file_add_key", 00:35:13.583 "params": { 00:35:13.583 "name": "key1", 00:35:13.583 "path": "/tmp/tmp.08Ww5SBgeI" 00:35:13.583 } 00:35:13.583 } 00:35:13.583 ] 00:35:13.583 }, 00:35:13.583 { 00:35:13.584 "subsystem": "iobuf", 00:35:13.584 "config": [ 00:35:13.584 { 00:35:13.584 "method": "iobuf_set_options", 00:35:13.584 "params": { 00:35:13.584 "small_pool_count": 8192, 00:35:13.584 "large_pool_count": 1024, 00:35:13.584 "small_bufsize": 8192, 00:35:13.584 "large_bufsize": 135168, 00:35:13.584 "enable_numa": false 00:35:13.584 } 00:35:13.584 } 00:35:13.584 ] 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "subsystem": "sock", 00:35:13.584 "config": [ 00:35:13.584 { 00:35:13.584 "method": "sock_set_default_impl", 00:35:13.584 "params": { 00:35:13.584 "impl_name": "posix" 00:35:13.584 } 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "method": "sock_impl_set_options", 00:35:13.584 "params": { 00:35:13.584 "impl_name": "ssl", 00:35:13.584 "recv_buf_size": 4096, 00:35:13.584 "send_buf_size": 4096, 00:35:13.584 "enable_recv_pipe": true, 00:35:13.584 "enable_quickack": false, 00:35:13.584 "enable_placement_id": 0, 00:35:13.584 "enable_zerocopy_send_server": true, 00:35:13.584 "enable_zerocopy_send_client": false, 00:35:13.584 "zerocopy_threshold": 0, 00:35:13.584 "tls_version": 0, 00:35:13.584 "enable_ktls": false 00:35:13.584 } 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "method": "sock_impl_set_options", 00:35:13.584 "params": { 00:35:13.584 "impl_name": "posix", 00:35:13.584 "recv_buf_size": 2097152, 00:35:13.584 "send_buf_size": 2097152, 00:35:13.584 "enable_recv_pipe": true, 00:35:13.584 "enable_quickack": false, 00:35:13.584 "enable_placement_id": 0, 00:35:13.584 "enable_zerocopy_send_server": true, 00:35:13.584 
"enable_zerocopy_send_client": false, 00:35:13.584 "zerocopy_threshold": 0, 00:35:13.584 "tls_version": 0, 00:35:13.584 "enable_ktls": false 00:35:13.584 } 00:35:13.584 } 00:35:13.584 ] 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "subsystem": "vmd", 00:35:13.584 "config": [] 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "subsystem": "accel", 00:35:13.584 "config": [ 00:35:13.584 { 00:35:13.584 "method": "accel_set_options", 00:35:13.584 "params": { 00:35:13.584 "small_cache_size": 128, 00:35:13.584 "large_cache_size": 16, 00:35:13.584 "task_count": 2048, 00:35:13.584 "sequence_count": 2048, 00:35:13.584 "buf_count": 2048 00:35:13.584 } 00:35:13.584 } 00:35:13.584 ] 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "subsystem": "bdev", 00:35:13.584 "config": [ 00:35:13.584 { 00:35:13.584 "method": "bdev_set_options", 00:35:13.584 "params": { 00:35:13.584 "bdev_io_pool_size": 65535, 00:35:13.584 "bdev_io_cache_size": 256, 00:35:13.584 "bdev_auto_examine": true, 00:35:13.584 "iobuf_small_cache_size": 128, 00:35:13.584 "iobuf_large_cache_size": 16 00:35:13.584 } 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "method": "bdev_raid_set_options", 00:35:13.584 "params": { 00:35:13.584 "process_window_size_kb": 1024, 00:35:13.584 "process_max_bandwidth_mb_sec": 0 00:35:13.584 } 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "method": "bdev_iscsi_set_options", 00:35:13.584 "params": { 00:35:13.584 "timeout_sec": 30 00:35:13.584 } 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "method": "bdev_nvme_set_options", 00:35:13.584 "params": { 00:35:13.584 "action_on_timeout": "none", 00:35:13.584 "timeout_us": 0, 00:35:13.584 "timeout_admin_us": 0, 00:35:13.584 "keep_alive_timeout_ms": 10000, 00:35:13.584 "arbitration_burst": 0, 00:35:13.584 "low_priority_weight": 0, 00:35:13.584 "medium_priority_weight": 0, 00:35:13.584 "high_priority_weight": 0, 00:35:13.584 "nvme_adminq_poll_period_us": 10000, 00:35:13.584 "nvme_ioq_poll_period_us": 0, 00:35:13.584 "io_queue_requests": 512, 00:35:13.584 
"delay_cmd_submit": true, 00:35:13.584 "transport_retry_count": 4, 00:35:13.584 "bdev_retry_count": 3, 00:35:13.584 "transport_ack_timeout": 0, 00:35:13.584 "ctrlr_loss_timeout_sec": 0, 00:35:13.584 "reconnect_delay_sec": 0, 00:35:13.584 "fast_io_fail_timeout_sec": 0, 00:35:13.584 "disable_auto_failback": false, 00:35:13.584 "generate_uuids": false, 00:35:13.584 "transport_tos": 0, 00:35:13.584 "nvme_error_stat": false, 00:35:13.584 "rdma_srq_size": 0, 00:35:13.584 "io_path_stat": false, 00:35:13.584 "allow_accel_sequence": false, 00:35:13.584 "rdma_max_cq_size": 0, 00:35:13.584 "rdma_cm_event_timeout_ms": 0, 00:35:13.584 "dhchap_digests": [ 00:35:13.584 "sha256", 00:35:13.584 "sha384", 00:35:13.584 "sha512" 00:35:13.584 ], 00:35:13.584 "dhchap_dhgroups": [ 00:35:13.584 "null", 00:35:13.584 "ffdhe2048", 00:35:13.584 "ffdhe3072", 00:35:13.584 "ffdhe4096", 00:35:13.584 "ffdhe6144", 00:35:13.584 "ffdhe8192" 00:35:13.584 ] 00:35:13.584 } 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "method": "bdev_nvme_attach_controller", 00:35:13.584 "params": { 00:35:13.584 "name": "nvme0", 00:35:13.584 "trtype": "TCP", 00:35:13.584 "adrfam": "IPv4", 00:35:13.584 "traddr": "127.0.0.1", 00:35:13.584 "trsvcid": "4420", 00:35:13.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.584 "prchk_reftag": false, 00:35:13.584 "prchk_guard": false, 00:35:13.584 "ctrlr_loss_timeout_sec": 0, 00:35:13.584 "reconnect_delay_sec": 0, 00:35:13.584 "fast_io_fail_timeout_sec": 0, 00:35:13.584 "psk": "key0", 00:35:13.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.584 "hdgst": false, 00:35:13.584 "ddgst": false, 00:35:13.584 "multipath": "multipath" 00:35:13.584 } 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "method": "bdev_nvme_set_hotplug", 00:35:13.584 "params": { 00:35:13.584 "period_us": 100000, 00:35:13.584 "enable": false 00:35:13.584 } 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 "method": "bdev_wait_for_examine" 00:35:13.584 } 00:35:13.584 ] 00:35:13.584 }, 00:35:13.584 { 00:35:13.584 
"subsystem": "nbd", 00:35:13.584 "config": [] 00:35:13.584 } 00:35:13.584 ] 00:35:13.584 }' 00:35:13.584 10:07:36 keyring_file -- keyring/file.sh@115 -- # killprocess 3192782 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3192782 ']' 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3192782 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3192782 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3192782' 00:35:13.584 killing process with pid 3192782 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@973 -- # kill 3192782 00:35:13.584 Received shutdown signal, test time was about 1.000000 seconds 00:35:13.584 00:35:13.584 Latency(us) 00:35:13.584 [2024-11-20T09:07:36.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.584 [2024-11-20T09:07:36.916Z] =================================================================================================================== 00:35:13.584 [2024-11-20T09:07:36.916Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:13.584 10:07:36 keyring_file -- common/autotest_common.sh@978 -- # wait 3192782 00:35:13.844 10:07:36 keyring_file -- keyring/file.sh@118 -- # bperfpid=3194291 00:35:13.844 10:07:36 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3194291 /var/tmp/bperf.sock 00:35:13.844 10:07:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3194291 ']' 00:35:13.844 10:07:36 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:13.844 10:07:36 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:13.844 10:07:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.844 10:07:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:13.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:13.844 10:07:36 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:13.844 "subsystems": [ 00:35:13.844 { 00:35:13.844 "subsystem": "keyring", 00:35:13.844 "config": [ 00:35:13.844 { 00:35:13.844 "method": "keyring_file_add_key", 00:35:13.844 "params": { 00:35:13.844 "name": "key0", 00:35:13.844 "path": "/tmp/tmp.7gRuJs9Osk" 00:35:13.844 } 00:35:13.844 }, 00:35:13.844 { 00:35:13.844 "method": "keyring_file_add_key", 00:35:13.844 "params": { 00:35:13.844 "name": "key1", 00:35:13.844 "path": "/tmp/tmp.08Ww5SBgeI" 00:35:13.844 } 00:35:13.844 } 00:35:13.845 ] 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "subsystem": "iobuf", 00:35:13.845 "config": [ 00:35:13.845 { 00:35:13.845 "method": "iobuf_set_options", 00:35:13.845 "params": { 00:35:13.845 "small_pool_count": 8192, 00:35:13.845 "large_pool_count": 1024, 00:35:13.845 "small_bufsize": 8192, 00:35:13.845 "large_bufsize": 135168, 00:35:13.845 "enable_numa": false 00:35:13.845 } 00:35:13.845 } 00:35:13.845 ] 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "subsystem": "sock", 00:35:13.845 "config": [ 00:35:13.845 { 00:35:13.845 "method": "sock_set_default_impl", 00:35:13.845 "params": { 00:35:13.845 "impl_name": "posix" 00:35:13.845 } 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "method": "sock_impl_set_options", 00:35:13.845 "params": { 00:35:13.845 "impl_name": "ssl", 00:35:13.845 "recv_buf_size": 4096, 00:35:13.845 
"send_buf_size": 4096, 00:35:13.845 "enable_recv_pipe": true, 00:35:13.845 "enable_quickack": false, 00:35:13.845 "enable_placement_id": 0, 00:35:13.845 "enable_zerocopy_send_server": true, 00:35:13.845 "enable_zerocopy_send_client": false, 00:35:13.845 "zerocopy_threshold": 0, 00:35:13.845 "tls_version": 0, 00:35:13.845 "enable_ktls": false 00:35:13.845 } 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "method": "sock_impl_set_options", 00:35:13.845 "params": { 00:35:13.845 "impl_name": "posix", 00:35:13.845 "recv_buf_size": 2097152, 00:35:13.845 "send_buf_size": 2097152, 00:35:13.845 "enable_recv_pipe": true, 00:35:13.845 "enable_quickack": false, 00:35:13.845 "enable_placement_id": 0, 00:35:13.845 "enable_zerocopy_send_server": true, 00:35:13.845 "enable_zerocopy_send_client": false, 00:35:13.845 "zerocopy_threshold": 0, 00:35:13.845 "tls_version": 0, 00:35:13.845 "enable_ktls": false 00:35:13.845 } 00:35:13.845 } 00:35:13.845 ] 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "subsystem": "vmd", 00:35:13.845 "config": [] 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "subsystem": "accel", 00:35:13.845 "config": [ 00:35:13.845 { 00:35:13.845 "method": "accel_set_options", 00:35:13.845 "params": { 00:35:13.845 "small_cache_size": 128, 00:35:13.845 "large_cache_size": 16, 00:35:13.845 "task_count": 2048, 00:35:13.845 "sequence_count": 2048, 00:35:13.845 "buf_count": 2048 00:35:13.845 } 00:35:13.845 } 00:35:13.845 ] 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "subsystem": "bdev", 00:35:13.845 "config": [ 00:35:13.845 { 00:35:13.845 "method": "bdev_set_options", 00:35:13.845 "params": { 00:35:13.845 "bdev_io_pool_size": 65535, 00:35:13.845 "bdev_io_cache_size": 256, 00:35:13.845 "bdev_auto_examine": true, 00:35:13.845 "iobuf_small_cache_size": 128, 00:35:13.845 "iobuf_large_cache_size": 16 00:35:13.845 } 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "method": "bdev_raid_set_options", 00:35:13.845 "params": { 00:35:13.845 "process_window_size_kb": 1024, 00:35:13.845 
"process_max_bandwidth_mb_sec": 0 00:35:13.845 } 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "method": "bdev_iscsi_set_options", 00:35:13.845 "params": { 00:35:13.845 "timeout_sec": 30 00:35:13.845 } 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "method": "bdev_nvme_set_options", 00:35:13.845 "params": { 00:35:13.845 "action_on_timeout": "none", 00:35:13.845 "timeout_us": 0, 00:35:13.845 "timeout_admin_us": 0, 00:35:13.845 "keep_alive_timeout_ms": 10000, 00:35:13.845 "arbitration_burst": 0, 00:35:13.845 "low_priority_weight": 0, 00:35:13.845 "medium_priority_weight": 0, 00:35:13.845 "high_priority_weight": 0, 00:35:13.845 "nvme_adminq_poll_period_us": 10000, 00:35:13.845 "nvme_ioq_poll_period_us": 0, 00:35:13.845 "io_queue_requests": 512, 00:35:13.845 "delay_cmd_submit": true, 00:35:13.845 "transport_retry_count": 4, 00:35:13.845 "bdev_retry_count": 3, 00:35:13.845 "transport_ack_timeout": 0, 00:35:13.845 "ctrlr_loss_timeout_sec": 0, 00:35:13.845 "reconnect_delay_sec": 0, 00:35:13.845 "fast_io_fail_timeout_sec": 0, 00:35:13.845 "disable_auto_failback": false, 00:35:13.845 "generate_uuids": false, 00:35:13.845 "transport_tos": 0, 00:35:13.845 "nvme_error_stat": false, 00:35:13.845 "rdma_srq_size": 0, 00:35:13.845 "io_path_stat": false, 00:35:13.845 "allow_accel_sequence": false, 00:35:13.845 "rdma_max_cq_size": 0, 00:35:13.845 "rdma_cm_event_timeout_ms": 0, 00:35:13.845 "dhchap_digests": [ 00:35:13.845 "sha256", 00:35:13.845 "sha384", 00:35:13.845 "sha512" 00:35:13.845 ], 00:35:13.845 "dhchap_dhgroups": [ 00:35:13.845 "null", 00:35:13.845 "ffdhe2048", 00:35:13.845 "ffdhe3072", 00:35:13.845 "ffdhe4096", 00:35:13.845 "ffdhe6144", 00:35:13.845 "ffdhe8192" 00:35:13.845 ] 00:35:13.845 } 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "method": "bdev_nvme_attach_controller", 00:35:13.845 "params": { 00:35:13.845 "name": "nvme0", 00:35:13.845 "trtype": "TCP", 00:35:13.845 "adrfam": "IPv4", 00:35:13.845 "traddr": "127.0.0.1", 00:35:13.845 "trsvcid": "4420", 00:35:13.845 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:13.845 "prchk_reftag": false, 00:35:13.845 "prchk_guard": false, 00:35:13.845 "ctrlr_loss_timeout_sec": 0, 00:35:13.845 "reconnect_delay_sec": 0, 00:35:13.845 "fast_io_fail_timeout_sec": 0, 00:35:13.845 "psk": "key0", 00:35:13.845 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.845 "hdgst": false, 00:35:13.845 "ddgst": false, 00:35:13.845 "multipath": "multipath" 00:35:13.845 } 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "method": "bdev_nvme_set_hotplug", 00:35:13.845 "params": { 00:35:13.845 "period_us": 100000, 00:35:13.845 "enable": false 00:35:13.845 } 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "method": "bdev_wait_for_examine" 00:35:13.845 } 00:35:13.845 ] 00:35:13.845 }, 00:35:13.845 { 00:35:13.845 "subsystem": "nbd", 00:35:13.845 "config": [] 00:35:13.845 } 00:35:13.846 ] 00:35:13.846 }' 00:35:13.846 10:07:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.846 10:07:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:13.846 [2024-11-20 10:07:36.969894] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:35:13.846 [2024-11-20 10:07:36.969945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194291 ] 00:35:13.846 [2024-11-20 10:07:37.048064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.846 [2024-11-20 10:07:37.086348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.104 [2024-11-20 10:07:37.247005] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:14.670 10:07:37 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.670 10:07:37 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:14.670 10:07:37 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:14.670 10:07:37 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:14.670 10:07:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.929 10:07:38 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:14.929 10:07:38 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:14.929 10:07:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.929 10:07:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:14.929 10:07:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.929 10:07:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:14.929 10:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.929 10:07:38 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:14.929 10:07:38 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:14.929 10:07:38 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:14.929 10:07:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.929 10:07:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.929 10:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.929 10:07:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:15.186 10:07:38 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:15.186 10:07:38 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:15.186 10:07:38 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:15.186 10:07:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:15.444 10:07:38 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:15.444 10:07:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:15.444 10:07:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.7gRuJs9Osk /tmp/tmp.08Ww5SBgeI 00:35:15.444 10:07:38 keyring_file -- keyring/file.sh@20 -- # killprocess 3194291 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3194291 ']' 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3194291 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194291 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3194291' 00:35:15.444 killing process with pid 3194291 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@973 -- # kill 3194291 00:35:15.444 Received shutdown signal, test time was about 1.000000 seconds 00:35:15.444 00:35:15.444 Latency(us) 00:35:15.444 [2024-11-20T09:07:38.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.444 [2024-11-20T09:07:38.776Z] =================================================================================================================== 00:35:15.444 [2024-11-20T09:07:38.776Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:15.444 10:07:38 keyring_file -- common/autotest_common.sh@978 -- # wait 3194291 00:35:15.703 10:07:38 keyring_file -- keyring/file.sh@21 -- # killprocess 3192767 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3192767 ']' 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3192767 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3192767 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3192767' 00:35:15.703 killing process with pid 3192767 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@973 -- # kill 3192767 00:35:15.703 10:07:38 keyring_file -- common/autotest_common.sh@978 -- # wait 3192767 00:35:15.961 00:35:15.961 real 0m11.840s 00:35:15.961 user 0m29.432s 00:35:15.961 sys 0m2.715s 00:35:15.961 10:07:39 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:15.961 10:07:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:15.961 ************************************ 00:35:15.961 END TEST keyring_file 00:35:15.961 ************************************ 00:35:15.961 10:07:39 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:15.961 10:07:39 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:15.961 10:07:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:15.961 10:07:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:15.961 10:07:39 -- common/autotest_common.sh@10 -- # set +x 00:35:15.961 ************************************ 00:35:15.961 START TEST keyring_linux 00:35:15.961 ************************************ 00:35:15.961 10:07:39 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:15.961 Joined session keyring: 280685180 00:35:16.221 * Looking for test storage... 
00:35:16.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:16.221 10:07:39 keyring_linux -- common/autotest_common.sh@1702 -- # [[ y == y ]] 00:35:16.221 10:07:39 keyring_linux -- common/autotest_common.sh@1703 -- # lcov --version 00:35:16.221 10:07:39 keyring_linux -- common/autotest_common.sh@1703 -- # awk '{print $NF}' 00:35:16.221 10:07:39 keyring_linux -- common/autotest_common.sh@1703 -- # lt 1.15 2 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:16.221 10:07:39 keyring_linux -- common/autotest_common.sh@1704 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.221 10:07:39 keyring_linux -- common/autotest_common.sh@1716 -- # export 'LCOV_OPTS= 00:35:16.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.221 --rc genhtml_branch_coverage=1 00:35:16.221 --rc genhtml_function_coverage=1 00:35:16.221 --rc genhtml_legend=1 00:35:16.221 --rc geninfo_all_blocks=1 00:35:16.221 --rc geninfo_unexecuted_blocks=1 00:35:16.221 00:35:16.221 ' 00:35:16.221 10:07:39 keyring_linux -- common/autotest_common.sh@1716 -- # LCOV_OPTS=' 00:35:16.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.221 --rc genhtml_branch_coverage=1 00:35:16.221 --rc genhtml_function_coverage=1 00:35:16.221 --rc genhtml_legend=1 00:35:16.221 --rc geninfo_all_blocks=1 00:35:16.221 --rc geninfo_unexecuted_blocks=1 00:35:16.221 00:35:16.221 ' 
00:35:16.221 10:07:39 keyring_linux -- common/autotest_common.sh@1717 -- # export 'LCOV=lcov 00:35:16.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.221 --rc genhtml_branch_coverage=1 00:35:16.221 --rc genhtml_function_coverage=1 00:35:16.221 --rc genhtml_legend=1 00:35:16.221 --rc geninfo_all_blocks=1 00:35:16.221 --rc geninfo_unexecuted_blocks=1 00:35:16.221 00:35:16.221 ' 00:35:16.221 10:07:39 keyring_linux -- common/autotest_common.sh@1717 -- # LCOV='lcov 00:35:16.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.221 --rc genhtml_branch_coverage=1 00:35:16.221 --rc genhtml_function_coverage=1 00:35:16.221 --rc genhtml_legend=1 00:35:16.221 --rc geninfo_all_blocks=1 00:35:16.221 --rc geninfo_unexecuted_blocks=1 00:35:16.221 00:35:16.221 ' 00:35:16.221 10:07:39 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:16.221 10:07:39 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.221 10:07:39 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.221 10:07:39 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.221 10:07:39 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.221 10:07:39 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.221 10:07:39 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:16.221 10:07:39 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:16.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.221 10:07:39 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:16.221 10:07:39 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:16.221 10:07:39 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:16.221 10:07:39 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:16.221 10:07:39 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:16.221 10:07:39 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:16.221 10:07:39 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:16.221 10:07:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:16.221 10:07:39 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:16.221 10:07:39 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:16.221 10:07:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:16.221 10:07:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:16.221 10:07:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:16.221 10:07:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:16.222 10:07:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:16.222 10:07:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:16.222 /tmp/:spdk-test:key0 00:35:16.222 10:07:39 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:16.222 10:07:39 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:16.222 10:07:39 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:16.222 10:07:39 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:16.222 10:07:39 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:16.222 10:07:39 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:16.222 10:07:39 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:16.222 10:07:39 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:16.481 10:07:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:16.481 10:07:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:16.481 /tmp/:spdk-test:key1 00:35:16.481 10:07:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3194853 00:35:16.481 10:07:39 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:16.481 10:07:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3194853 00:35:16.481 10:07:39 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3194853 ']' 00:35:16.481 10:07:39 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.481 10:07:39 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:16.481 10:07:39 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.481 10:07:39 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:16.481 10:07:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:16.481 [2024-11-20 10:07:39.618575] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:35:16.481 [2024-11-20 10:07:39.618625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194853 ] 00:35:16.481 [2024-11-20 10:07:39.693918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.481 [2024-11-20 10:07:39.736158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.743 10:07:39 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:16.743 10:07:39 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:16.743 10:07:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:16.743 10:07:39 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.743 10:07:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:16.743 [2024-11-20 10:07:39.948426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.743 null0 00:35:16.743 [2024-11-20 10:07:39.980477] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:16.743 [2024-11-20 10:07:39.980864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:16.743 10:07:39 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.743 10:07:40 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:16.743 775988497 00:35:16.743 10:07:40 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:16.743 199663798 00:35:16.743 10:07:40 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3194887 00:35:16.743 10:07:40 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:16.743 10:07:40 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3194887 /var/tmp/bperf.sock 00:35:16.743 10:07:40 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3194887 ']' 00:35:16.743 10:07:40 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:16.743 10:07:40 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:16.743 10:07:40 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:16.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:16.743 10:07:40 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:16.743 10:07:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:16.743 [2024-11-20 10:07:40.054001] Starting SPDK v25.01-pre git sha1 27a4d33d8 / DPDK 24.03.0 initialization... 
00:35:16.743 [2024-11-20 10:07:40.054049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194887 ] 00:35:17.022 [2024-11-20 10:07:40.129390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.022 [2024-11-20 10:07:40.172511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.022 10:07:40 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:17.022 10:07:40 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:17.022 10:07:40 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:17.022 10:07:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:17.357 10:07:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:17.357 10:07:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:17.642 10:07:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:17.642 10:07:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:17.642 [2024-11-20 10:07:40.833568] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:17.642 nvme0n1 00:35:17.642 10:07:40 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:17.642 10:07:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:17.642 10:07:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:17.642 10:07:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:17.642 10:07:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:17.642 10:07:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.901 10:07:41 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:17.901 10:07:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:17.901 10:07:41 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:17.901 10:07:41 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:17.901 10:07:41 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:17.901 10:07:41 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:17.901 10:07:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.159 10:07:41 keyring_linux -- keyring/linux.sh@25 -- # sn=775988497 00:35:18.159 10:07:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:18.159 10:07:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:18.159 10:07:41 keyring_linux -- keyring/linux.sh@26 -- # [[ 775988497 == \7\7\5\9\8\8\4\9\7 ]] 00:35:18.159 10:07:41 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 775988497 00:35:18.159 10:07:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:18.159 10:07:41 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:18.159 Running I/O for 1 seconds... 00:35:19.537 21198.00 IOPS, 82.80 MiB/s 00:35:19.537 Latency(us) 00:35:19.537 [2024-11-20T09:07:42.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.537 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:19.537 nvme0n1 : 1.01 21196.72 82.80 0.00 0.00 6018.48 1966.08 7522.39 00:35:19.537 [2024-11-20T09:07:42.869Z] =================================================================================================================== 00:35:19.537 [2024-11-20T09:07:42.869Z] Total : 21196.72 82.80 0.00 0.00 6018.48 1966.08 7522.39 00:35:19.537 { 00:35:19.537 "results": [ 00:35:19.537 { 00:35:19.537 "job": "nvme0n1", 00:35:19.537 "core_mask": "0x2", 00:35:19.537 "workload": "randread", 00:35:19.537 "status": "finished", 00:35:19.537 "queue_depth": 128, 00:35:19.537 "io_size": 4096, 00:35:19.537 "runtime": 1.006099, 00:35:19.537 "iops": 21196.721197416955, 00:35:19.537 "mibps": 82.79969217740998, 00:35:19.537 "io_failed": 0, 00:35:19.537 "io_timeout": 0, 00:35:19.537 "avg_latency_us": 6018.484019098956, 00:35:19.537 "min_latency_us": 1966.08, 00:35:19.537 "max_latency_us": 7522.393043478261 00:35:19.537 } 00:35:19.537 ], 00:35:19.537 "core_count": 1 00:35:19.537 } 00:35:19.537 10:07:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:19.537 10:07:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:19.537 10:07:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:19.537 10:07:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:19.537 10:07:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:19.537 10:07:42 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:19.537 10:07:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.537 10:07:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:19.797 10:07:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:19.797 10:07:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:19.797 10:07:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:19.797 10:07:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:19.797 10:07:42 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:19.797 10:07:42 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:19.797 10:07:42 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:19.797 10:07:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.797 10:07:42 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:19.797 10:07:42 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.797 10:07:42 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:19.797 10:07:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:19.797 [2024-11-20 10:07:43.069616] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:19.797 [2024-11-20 10:07:43.070106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8fa70 (107): Transport endpoint is not connected 00:35:19.797 [2024-11-20 10:07:43.071101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8fa70 (9): Bad file descriptor 00:35:19.797 [2024-11-20 10:07:43.072102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:19.797 [2024-11-20 10:07:43.072113] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:19.797 [2024-11-20 10:07:43.072120] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:19.797 [2024-11-20 10:07:43.072139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
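The `:spdk-test:key0` and `:spdk-test:key1` keys registered with `keyctl` earlier in this log use the NVMe/TCP TLS PSK interchange form `NVMeTLSkey-1:00:<base64>:`. A minimal sketch of unpacking that string (Python stand-in for illustration only; `parse_psk_interchange` is a hypothetical helper, not an SPDK API, and the payload layout of key bytes plus a 4-byte trailer is assumed from the strings visible in this log):

```python
import base64

def parse_psk_interchange(key: str) -> bytes:
    # Hypothetical helper: validate the "NVMeTLSkey-1:<hash>:<base64>:" shape
    # and return the decoded payload (configured PSK bytes plus a short trailer).
    prefix = "NVMeTLSkey-1:"
    if not (key.startswith(prefix) and key.endswith(":")):
        raise ValueError("not a PSK interchange string")
    hash_id, b64 = key[len(prefix):-1].split(":", 1)
    return base64.b64decode(b64)

# key0 exactly as registered in the session keyring above
key0 = "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
payload = parse_psk_interchange(key0)
```

For this key the decoded payload is the 32-byte configured secret followed by a 4-byte trailer appended by the interchange format.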
00:35:19.797 request: 00:35:19.797 { 00:35:19.797 "name": "nvme0", 00:35:19.797 "trtype": "tcp", 00:35:19.797 "traddr": "127.0.0.1", 00:35:19.797 "adrfam": "ipv4", 00:35:19.797 "trsvcid": "4420", 00:35:19.797 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.797 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:19.797 "prchk_reftag": false, 00:35:19.797 "prchk_guard": false, 00:35:19.797 "hdgst": false, 00:35:19.797 "ddgst": false, 00:35:19.797 "psk": ":spdk-test:key1", 00:35:19.797 "allow_unrecognized_csi": false, 00:35:19.797 "method": "bdev_nvme_attach_controller", 00:35:19.797 "req_id": 1 00:35:19.797 } 00:35:19.797 Got JSON-RPC error response 00:35:19.797 response: 00:35:19.797 { 00:35:19.797 "code": -5, 00:35:19.797 "message": "Input/output error" 00:35:19.797 } 00:35:19.797 10:07:43 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:19.797 10:07:43 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.797 10:07:43 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.797 10:07:43 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@33 -- # sn=775988497 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 775988497 00:35:19.797 1 links removed 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:19.797 
10:07:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@33 -- # sn=199663798 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 199663798 00:35:19.797 1 links removed 00:35:19.797 10:07:43 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3194887 00:35:19.797 10:07:43 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3194887 ']' 00:35:19.797 10:07:43 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3194887 00:35:19.797 10:07:43 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:19.797 10:07:43 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.797 10:07:43 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194887 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194887' 00:35:20.057 killing process with pid 3194887 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@973 -- # kill 3194887 00:35:20.057 Received shutdown signal, test time was about 1.000000 seconds 00:35:20.057 00:35:20.057 Latency(us) 00:35:20.057 [2024-11-20T09:07:43.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.057 [2024-11-20T09:07:43.389Z] =================================================================================================================== 00:35:20.057 [2024-11-20T09:07:43.389Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@978 -- # wait 3194887 
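The MiB/s column in the bdevperf results above is just IOPS times the I/O size converted to mebibytes per second; a quick check against the JSON block in this log (values copied verbatim from the log; Python is used here only for the arithmetic):

```python
# Values from the bdevperf JSON results for job nvme0n1 above
iops = 21196.721197416955
io_size = 4096                           # bytes per I/O (-o 4k)

mibps = iops * io_size / (1024 * 1024)   # bytes/s -> MiB/s
```

This reproduces the reported 82.80 MiB/s figure.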
00:35:20.057 10:07:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3194853 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3194853 ']' 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3194853 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194853 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194853' 00:35:20.057 killing process with pid 3194853 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@973 -- # kill 3194853 00:35:20.057 10:07:43 keyring_linux -- common/autotest_common.sh@978 -- # wait 3194853 00:35:20.625 00:35:20.625 real 0m4.405s 00:35:20.625 user 0m8.370s 00:35:20.625 sys 0m1.425s 00:35:20.625 10:07:43 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:20.625 10:07:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:20.625 ************************************ 00:35:20.625 END TEST keyring_linux 00:35:20.625 ************************************ 00:35:20.625 10:07:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:20.625 10:07:43 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:20.625 10:07:43 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:20.625 10:07:43 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:20.625 10:07:43 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:20.625 10:07:43 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:20.626 10:07:43 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:20.626 10:07:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:20.626 10:07:43 -- common/autotest_common.sh@10 -- # set +x 00:35:20.626 10:07:43 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:20.626 10:07:43 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:20.626 10:07:43 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:20.626 10:07:43 -- common/autotest_common.sh@10 -- # set +x 00:35:25.897 INFO: APP EXITING 00:35:25.897 INFO: killing all VMs 00:35:25.897 INFO: killing vhost app 00:35:25.897 INFO: EXIT DONE 00:35:28.433 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:28.433 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:28.433 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:28.433 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:31.724 Cleaning 00:35:31.724 Removing: /var/run/dpdk/spdk0/config 00:35:31.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:31.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:31.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:31.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:31.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:31.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:31.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:31.724 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:31.724 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:31.724 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:31.724 Removing: /var/run/dpdk/spdk1/config 00:35:31.724 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:31.724 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:31.724 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:31.724 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:31.724 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:31.724 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:31.724 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:31.724 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:31.724 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:31.724 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:31.724 Removing: /var/run/dpdk/spdk2/config 00:35:31.724 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:31.724 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:31.724 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:31.724 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:31.724 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:31.724 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:31.724 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:31.724 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:31.724 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:31.724 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:31.724 Removing: /var/run/dpdk/spdk3/config 00:35:31.724 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:31.724 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:31.724 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:31.724 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:31.724 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:31.724 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:31.724 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:31.724 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:31.724 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:31.724 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:31.724 Removing: /var/run/dpdk/spdk4/config 00:35:31.724 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:31.724 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:31.724 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:31.724 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:31.724 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:31.724 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:31.724 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:31.724 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:31.724 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:31.724 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:31.724 Removing: /dev/shm/bdev_svc_trace.1 00:35:31.724 Removing: /dev/shm/nvmf_trace.0 00:35:31.724 Removing: /dev/shm/spdk_tgt_trace.pid2715379 00:35:31.724 Removing: /var/run/dpdk/spdk0 00:35:31.724 Removing: /var/run/dpdk/spdk1 00:35:31.724 Removing: /var/run/dpdk/spdk2 00:35:31.724 Removing: /var/run/dpdk/spdk3 00:35:31.724 Removing: /var/run/dpdk/spdk4 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2713233 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2714291 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2715379 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2716014 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2716958 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2716978 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2718074 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2718302 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2718508 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2720560 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2721844 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2722135 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2722422 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2722729 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2723019 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2723250 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2723420 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2723735 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2724557 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2727553 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2727811 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2728017 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2728074 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2728434 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2728588 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2728877 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2729091 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2729352 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2729367 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2729619 00:35:31.724 Removing: 
/var/run/dpdk/spdk_pid2729740 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2730199 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2730450 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2730755 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2734472 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2738938 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2748970 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2749660 00:35:31.724 Removing: /var/run/dpdk/spdk_pid2753960 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2754413 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2758694 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2765095 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2767711 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2777912 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2786842 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2788675 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2789603 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2806399 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2810321 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2856991 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2862306 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2868467 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2875001 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2875085 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2875873 00:35:31.725 Removing: /var/run/dpdk/spdk_pid2876785 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2877696 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2878170 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2878225 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2878526 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2878631 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2878636 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2879554 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2880465 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2881281 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2881853 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2881856 00:35:31.984 Removing: /var/run/dpdk/spdk_pid2882090 
00:35:31.984 Removing: /var/run/dpdk/spdk_pid2883187 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2884204 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2892412 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2921749 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2926252 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2927858 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2929698 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2929719 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2929952 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2930094 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2930544 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2932308 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2933145 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2933570 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2936300 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2936694 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2937400 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2941606 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2947018 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2947019 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2947021 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2950859 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2959203 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2963007 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2969226 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2970385 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2971873 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2973416 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2977934 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2982560 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2986910 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2994365 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2994369 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2999084 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2999318 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2999547 00:35:31.985 Removing: /var/run/dpdk/spdk_pid2999942 00:35:31.985 Removing: 
/var/run/dpdk/spdk_pid3000010 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3004396 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3004908 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3009409 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3011949 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3017339 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3022888 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3031527 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3039070 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3039129 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3057765 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3058247 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3058929 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3059403 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3060143 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3060623 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3061203 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3061782 00:35:31.985 Removing: /var/run/dpdk/spdk_pid3065817 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3066077 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3072113 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3072352 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3077640 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3082139 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3092281 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3092814 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3097065 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3097318 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3101557 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3107204 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3109916 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3119980 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3128995 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3130998 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3131914 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3147824 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3151645 00:35:32.243 Removing: /var/run/dpdk/spdk_pid3154460 
00:35:32.243 Removing: /var/run/dpdk/spdk_pid3162272
00:35:32.243 Removing: /var/run/dpdk/spdk_pid3162289
00:35:32.243 Removing: /var/run/dpdk/spdk_pid3167330
00:35:32.243 Removing: /var/run/dpdk/spdk_pid3169293
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3171258
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3172428
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3174792
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3175855
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3184607
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3185074
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3185736
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3188015
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3188480
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3188949
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3192767
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3192782
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3194291
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3194853
00:35:32.244 Removing: /var/run/dpdk/spdk_pid3194887
00:35:32.244 Clean
00:35:32.244 10:07:55 -- common/autotest_common.sh@1453 -- # return 0
00:35:32.244 10:07:55 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:32.244 10:07:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:32.244 10:07:55 -- common/autotest_common.sh@10 -- # set +x
00:35:32.244 10:07:55 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:32.244 10:07:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:32.244 10:07:55 -- common/autotest_common.sh@10 -- # set +x
00:35:32.502 10:07:55 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:32.502 10:07:55 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:32.502 10:07:55 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:32.502 10:07:55 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:32.502 10:07:55 -- spdk/autotest.sh@398 -- # hostname
00:35:32.502 10:07:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:32.502 geninfo: WARNING: invalid characters removed from testname!
00:35:54.438 10:08:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:56.972 10:08:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:58.877 10:08:21 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:00.258 10:08:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:02.167 10:08:25 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:04.072 10:08:27 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:05.977 10:08:29 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:05.977 10:08:29 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:05.977 10:08:29 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:05.977 10:08:29 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:05.977 10:08:29 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:05.977 10:08:29 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:05.977 + [[ -n 2636319 ]]
00:36:05.977 + sudo kill 2636319
00:36:05.987 [Pipeline] }
00:36:06.002 [Pipeline] // stage
00:36:06.008 [Pipeline] }
00:36:06.032 [Pipeline] // timeout
00:36:06.050 [Pipeline] }
00:36:06.081 [Pipeline] // catchError
00:36:06.084 [Pipeline] }
00:36:06.093 [Pipeline] // wrap
00:36:06.096 [Pipeline] }
00:36:06.103 [Pipeline] // catchError
00:36:06.109 [Pipeline] stage
00:36:06.110 [Pipeline] { (Epilogue)
00:36:06.119 [Pipeline] catchError
00:36:06.120 [Pipeline] {
00:36:06.128 [Pipeline] echo
00:36:06.129 Cleanup processes
00:36:06.133 [Pipeline] sh
00:36:06.414 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:06.415 3205528 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:06.430 [Pipeline] sh
00:36:06.716 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:06.716 ++ grep -v 'sudo pgrep'
00:36:06.716 ++ awk '{print $1}'
00:36:06.716 + sudo kill -9
00:36:06.716 + true
00:36:06.728 [Pipeline] sh
00:36:07.013 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:19.237 [Pipeline] sh
00:36:19.522 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:19.522 Artifacts sizes are good
00:36:19.537 [Pipeline] archiveArtifacts
00:36:19.545 Archiving artifacts
00:36:19.717 [Pipeline] sh
00:36:20.053 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:20.069 [Pipeline] cleanWs
00:36:20.080 [WS-CLEANUP] Deleting project workspace...
00:36:20.080 [WS-CLEANUP] Deferred wipeout is used...
00:36:20.087 [WS-CLEANUP] done
00:36:20.089 [Pipeline] }
00:36:20.106 [Pipeline] // catchError
00:36:20.119 [Pipeline] sh
00:36:20.402 + logger -p user.info -t JENKINS-CI
00:36:20.411 [Pipeline] }
00:36:20.424 [Pipeline] // stage
00:36:20.430 [Pipeline] }
00:36:20.444 [Pipeline] // node
00:36:20.451 [Pipeline] End of Pipeline
00:36:20.488 Finished: SUCCESS